Test Report: KVM_Linux_crio 19446

68089f2e899ecb1db727fde03c1d4991123fd325:2024-08-14:35784

Failed tests (29/318)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 151.84
36 TestAddons/parallel/MetricsServer 321.03
45 TestAddons/StoppedEnableDisable 154.28
164 TestMultiControlPlane/serial/StopSecondaryNode 141.81
166 TestMultiControlPlane/serial/RestartSecondaryNode 59.12
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 417.38
171 TestMultiControlPlane/serial/StopCluster 141.52
231 TestMultiNode/serial/RestartKeepsNodes 321.19
233 TestMultiNode/serial/StopMultiNode 141.34
240 TestPreload 336.92
248 TestKubernetesUpgrade 402.84
320 TestStartStop/group/old-k8s-version/serial/FirstStart 289.57
345 TestStartStop/group/no-preload/serial/Stop 139
348 TestStartStop/group/embed-certs/serial/Stop 139.07
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.16
352 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
353 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
354 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 113.46
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
362 TestStartStop/group/old-k8s-version/serial/SecondStart 717.97
363 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.14
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.03
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.36
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 444.43
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 442.47
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 340.6
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 137.05
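
For anyone triaging this run locally, the failures above can be re-run one at a time by name. A minimal sketch, assuming the usual Go test layout of the minikube integration suite; the package path, timeout, and any suite-specific flags are assumptions and not taken from this report:

    # Hedged sketch: re-run a single failed test by name with Go's -run filter.
    # Package path and timeout are assumptions; adjust to the local checkout.
    go test ./test/integration -v -timeout 90m -run 'TestAddons/parallel/Ingress'
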
TestAddons/parallel/Ingress (151.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-521895 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-521895 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-521895 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0036eca6-d67d-4be0-8ac1-c9992f0e271c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0036eca6-d67d-4be0-8ac1-c9992f0e271c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003569313s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-521895 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.059822067s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
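
The ssh step propagates the remote command's status, and curl exit status 28 conventionally means the request timed out, which matches the ~2m9s the command spent before failing. A minimal reproduction/diagnosis sketch against this profile follows; the first command is copied verbatim from the log above, while the kubectl diagnostics are generic suggestions (assumptions), not steps the test itself performs:

    # Repeat the failing check from the log (verbatim).
    out/minikube-linux-amd64 -p addons-521895 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Generic follow-up diagnostics (assumptions, not from this report):
    # confirm the ingress-nginx controller is Running and the Ingress got an address.
    kubectl --context addons-521895 -n ingress-nginx get pods -o wide
    kubectl --context addons-521895 get ingress -A -o wide
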
addons_test.go:288: (dbg) Run:  kubectl --context addons-521895 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.170
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-521895 addons disable ingress-dns --alsologtostderr -v=1: (1.214770656s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-521895 addons disable ingress --alsologtostderr -v=1: (7.680912163s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-521895 -n addons-521895
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-521895 logs -n 25: (1.097734333s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-074409                                                                     | download-only-074409 | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC | 14 Aug 24 16:10 UTC |
	| delete  | -p download-only-495471                                                                     | download-only-495471 | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC | 14 Aug 24 16:10 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-629887 | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC |                     |
	|         | binary-mirror-629887                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46569                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-629887                                                                     | binary-mirror-629887 | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC | 14 Aug 24 16:10 UTC |
	| addons  | disable dashboard -p                                                                        | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC |                     |
	|         | addons-521895                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC |                     |
	|         | addons-521895                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-521895 --wait=true                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC | 14 Aug 24 16:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | addons-521895                                                                               |                      |         |         |                     |                     |
	| ip      | addons-521895 ip                                                                            | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-521895 ssh curl -s                                                                   | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-521895 ssh cat                                                                       | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | /opt/local-path-provisioner/pvc-230f268c-e9fb-47c8-a734-e535e5b8b6a9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-521895 addons                                                                        | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-521895 addons                                                                        | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | -p addons-521895                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | addons-521895                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | -p addons-521895                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-521895 ip                                                                            | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:16 UTC | 14 Aug 24 16:16 UTC |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:16 UTC | 14 Aug 24 16:16 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:16 UTC | 14 Aug 24 16:16 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:10:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:10:06.091073   21883 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:10:06.091202   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:10:06.091212   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:10:06.091217   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:10:06.091439   21883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:10:06.092072   21883 out.go:298] Setting JSON to false
	I0814 16:10:06.092936   21883 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3150,"bootTime":1723648656,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:10:06.092990   21883 start.go:139] virtualization: kvm guest
	I0814 16:10:06.095031   21883 out.go:177] * [addons-521895] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:10:06.096420   21883 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:10:06.096420   21883 notify.go:220] Checking for updates...
	I0814 16:10:06.097937   21883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:10:06.099288   21883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:10:06.100579   21883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:10:06.101794   21883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:10:06.103045   21883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:10:06.104357   21883 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:10:06.134990   21883 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 16:10:06.136076   21883 start.go:297] selected driver: kvm2
	I0814 16:10:06.136097   21883 start.go:901] validating driver "kvm2" against <nil>
	I0814 16:10:06.136108   21883 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:10:06.136812   21883 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:10:06.136886   21883 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 16:10:06.151588   21883 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 16:10:06.151640   21883 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:10:06.151879   21883 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:10:06.151953   21883 cni.go:84] Creating CNI manager for ""
	I0814 16:10:06.151969   21883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 16:10:06.151981   21883 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 16:10:06.152044   21883 start.go:340] cluster config:
	{Name:addons-521895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:10:06.152159   21883 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:10:06.153813   21883 out.go:177] * Starting "addons-521895" primary control-plane node in "addons-521895" cluster
	I0814 16:10:06.155036   21883 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:10:06.155072   21883 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:10:06.155087   21883 cache.go:56] Caching tarball of preloaded images
	I0814 16:10:06.155184   21883 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:10:06.155198   21883 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:10:06.155566   21883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/config.json ...
	I0814 16:10:06.155590   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/config.json: {Name:mk2c74c8b25cb0d239f5c19085340188d3cc7de6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:06.155739   21883 start.go:360] acquireMachinesLock for addons-521895: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:10:06.155795   21883 start.go:364] duration metric: took 40.446µs to acquireMachinesLock for "addons-521895"
	I0814 16:10:06.155816   21883 start.go:93] Provisioning new machine with config: &{Name:addons-521895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:10:06.155887   21883 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 16:10:06.157473   21883 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0814 16:10:06.157631   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:06.157680   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:06.171750   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0814 16:10:06.172210   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:06.172751   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:06.172771   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:06.173130   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:06.173373   21883 main.go:141] libmachine: (addons-521895) Calling .GetMachineName
	I0814 16:10:06.173580   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:06.173769   21883 start.go:159] libmachine.API.Create for "addons-521895" (driver="kvm2")
	I0814 16:10:06.173804   21883 client.go:168] LocalClient.Create starting
	I0814 16:10:06.173856   21883 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 16:10:06.373032   21883 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 16:10:06.467215   21883 main.go:141] libmachine: Running pre-create checks...
	I0814 16:10:06.467238   21883 main.go:141] libmachine: (addons-521895) Calling .PreCreateCheck
	I0814 16:10:06.467777   21883 main.go:141] libmachine: (addons-521895) Calling .GetConfigRaw
	I0814 16:10:06.468187   21883 main.go:141] libmachine: Creating machine...
	I0814 16:10:06.468201   21883 main.go:141] libmachine: (addons-521895) Calling .Create
	I0814 16:10:06.468354   21883 main.go:141] libmachine: (addons-521895) Creating KVM machine...
	I0814 16:10:06.469538   21883 main.go:141] libmachine: (addons-521895) DBG | found existing default KVM network
	I0814 16:10:06.470305   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:06.470159   21904 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0814 16:10:06.470332   21883 main.go:141] libmachine: (addons-521895) DBG | created network xml: 
	I0814 16:10:06.470345   21883 main.go:141] libmachine: (addons-521895) DBG | <network>
	I0814 16:10:06.470356   21883 main.go:141] libmachine: (addons-521895) DBG |   <name>mk-addons-521895</name>
	I0814 16:10:06.470369   21883 main.go:141] libmachine: (addons-521895) DBG |   <dns enable='no'/>
	I0814 16:10:06.470379   21883 main.go:141] libmachine: (addons-521895) DBG |   
	I0814 16:10:06.470391   21883 main.go:141] libmachine: (addons-521895) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0814 16:10:06.470401   21883 main.go:141] libmachine: (addons-521895) DBG |     <dhcp>
	I0814 16:10:06.470411   21883 main.go:141] libmachine: (addons-521895) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0814 16:10:06.470424   21883 main.go:141] libmachine: (addons-521895) DBG |     </dhcp>
	I0814 16:10:06.470430   21883 main.go:141] libmachine: (addons-521895) DBG |   </ip>
	I0814 16:10:06.470435   21883 main.go:141] libmachine: (addons-521895) DBG |   
	I0814 16:10:06.470441   21883 main.go:141] libmachine: (addons-521895) DBG | </network>
	I0814 16:10:06.470447   21883 main.go:141] libmachine: (addons-521895) DBG | 
	I0814 16:10:06.475921   21883 main.go:141] libmachine: (addons-521895) DBG | trying to create private KVM network mk-addons-521895 192.168.39.0/24...
	I0814 16:10:06.539319   21883 main.go:141] libmachine: (addons-521895) DBG | private KVM network mk-addons-521895 192.168.39.0/24 created
	I0814 16:10:06.539383   21883 main.go:141] libmachine: (addons-521895) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895 ...
	I0814 16:10:06.539416   21883 main.go:141] libmachine: (addons-521895) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:10:06.539489   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:06.539365   21904 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:10:06.539609   21883 main.go:141] libmachine: (addons-521895) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 16:10:06.790932   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:06.790767   21904 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa...
	I0814 16:10:07.007275   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:07.007131   21904 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/addons-521895.rawdisk...
	I0814 16:10:07.007339   21883 main.go:141] libmachine: (addons-521895) DBG | Writing magic tar header
	I0814 16:10:07.007398   21883 main.go:141] libmachine: (addons-521895) DBG | Writing SSH key tar header
	I0814 16:10:07.007436   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:07.007248   21904 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895 ...
	I0814 16:10:07.007477   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895 (perms=drwx------)
	I0814 16:10:07.007490   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895
	I0814 16:10:07.007497   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 16:10:07.007509   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 16:10:07.007518   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 16:10:07.007532   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 16:10:07.007542   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 16:10:07.007555   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 16:10:07.007571   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:10:07.007583   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 16:10:07.007600   21883 main.go:141] libmachine: (addons-521895) Creating domain...
	I0814 16:10:07.007610   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 16:10:07.007624   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins
	I0814 16:10:07.007632   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home
	I0814 16:10:07.007647   21883 main.go:141] libmachine: (addons-521895) DBG | Skipping /home - not owner
	I0814 16:10:07.008634   21883 main.go:141] libmachine: (addons-521895) define libvirt domain using xml: 
	I0814 16:10:07.008658   21883 main.go:141] libmachine: (addons-521895) <domain type='kvm'>
	I0814 16:10:07.008669   21883 main.go:141] libmachine: (addons-521895)   <name>addons-521895</name>
	I0814 16:10:07.008677   21883 main.go:141] libmachine: (addons-521895)   <memory unit='MiB'>4000</memory>
	I0814 16:10:07.008706   21883 main.go:141] libmachine: (addons-521895)   <vcpu>2</vcpu>
	I0814 16:10:07.008729   21883 main.go:141] libmachine: (addons-521895)   <features>
	I0814 16:10:07.008736   21883 main.go:141] libmachine: (addons-521895)     <acpi/>
	I0814 16:10:07.008743   21883 main.go:141] libmachine: (addons-521895)     <apic/>
	I0814 16:10:07.008749   21883 main.go:141] libmachine: (addons-521895)     <pae/>
	I0814 16:10:07.008755   21883 main.go:141] libmachine: (addons-521895)     
	I0814 16:10:07.008763   21883 main.go:141] libmachine: (addons-521895)   </features>
	I0814 16:10:07.008768   21883 main.go:141] libmachine: (addons-521895)   <cpu mode='host-passthrough'>
	I0814 16:10:07.008777   21883 main.go:141] libmachine: (addons-521895)   
	I0814 16:10:07.008791   21883 main.go:141] libmachine: (addons-521895)   </cpu>
	I0814 16:10:07.008802   21883 main.go:141] libmachine: (addons-521895)   <os>
	I0814 16:10:07.008833   21883 main.go:141] libmachine: (addons-521895)     <type>hvm</type>
	I0814 16:10:07.008851   21883 main.go:141] libmachine: (addons-521895)     <boot dev='cdrom'/>
	I0814 16:10:07.008865   21883 main.go:141] libmachine: (addons-521895)     <boot dev='hd'/>
	I0814 16:10:07.008877   21883 main.go:141] libmachine: (addons-521895)     <bootmenu enable='no'/>
	I0814 16:10:07.008889   21883 main.go:141] libmachine: (addons-521895)   </os>
	I0814 16:10:07.008900   21883 main.go:141] libmachine: (addons-521895)   <devices>
	I0814 16:10:07.008909   21883 main.go:141] libmachine: (addons-521895)     <disk type='file' device='cdrom'>
	I0814 16:10:07.008928   21883 main.go:141] libmachine: (addons-521895)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/boot2docker.iso'/>
	I0814 16:10:07.008937   21883 main.go:141] libmachine: (addons-521895)       <target dev='hdc' bus='scsi'/>
	I0814 16:10:07.008944   21883 main.go:141] libmachine: (addons-521895)       <readonly/>
	I0814 16:10:07.008956   21883 main.go:141] libmachine: (addons-521895)     </disk>
	I0814 16:10:07.008968   21883 main.go:141] libmachine: (addons-521895)     <disk type='file' device='disk'>
	I0814 16:10:07.008982   21883 main.go:141] libmachine: (addons-521895)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 16:10:07.009000   21883 main.go:141] libmachine: (addons-521895)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/addons-521895.rawdisk'/>
	I0814 16:10:07.009012   21883 main.go:141] libmachine: (addons-521895)       <target dev='hda' bus='virtio'/>
	I0814 16:10:07.009020   21883 main.go:141] libmachine: (addons-521895)     </disk>
	I0814 16:10:07.009028   21883 main.go:141] libmachine: (addons-521895)     <interface type='network'>
	I0814 16:10:07.009040   21883 main.go:141] libmachine: (addons-521895)       <source network='mk-addons-521895'/>
	I0814 16:10:07.009053   21883 main.go:141] libmachine: (addons-521895)       <model type='virtio'/>
	I0814 16:10:07.009067   21883 main.go:141] libmachine: (addons-521895)     </interface>
	I0814 16:10:07.009080   21883 main.go:141] libmachine: (addons-521895)     <interface type='network'>
	I0814 16:10:07.009090   21883 main.go:141] libmachine: (addons-521895)       <source network='default'/>
	I0814 16:10:07.009099   21883 main.go:141] libmachine: (addons-521895)       <model type='virtio'/>
	I0814 16:10:07.009107   21883 main.go:141] libmachine: (addons-521895)     </interface>
	I0814 16:10:07.009114   21883 main.go:141] libmachine: (addons-521895)     <serial type='pty'>
	I0814 16:10:07.009124   21883 main.go:141] libmachine: (addons-521895)       <target port='0'/>
	I0814 16:10:07.009138   21883 main.go:141] libmachine: (addons-521895)     </serial>
	I0814 16:10:07.009153   21883 main.go:141] libmachine: (addons-521895)     <console type='pty'>
	I0814 16:10:07.009167   21883 main.go:141] libmachine: (addons-521895)       <target type='serial' port='0'/>
	I0814 16:10:07.009174   21883 main.go:141] libmachine: (addons-521895)     </console>
	I0814 16:10:07.009179   21883 main.go:141] libmachine: (addons-521895)     <rng model='virtio'>
	I0814 16:10:07.009187   21883 main.go:141] libmachine: (addons-521895)       <backend model='random'>/dev/random</backend>
	I0814 16:10:07.009192   21883 main.go:141] libmachine: (addons-521895)     </rng>
	I0814 16:10:07.009205   21883 main.go:141] libmachine: (addons-521895)     
	I0814 16:10:07.009213   21883 main.go:141] libmachine: (addons-521895)     
	I0814 16:10:07.009217   21883 main.go:141] libmachine: (addons-521895)   </devices>
	I0814 16:10:07.009224   21883 main.go:141] libmachine: (addons-521895) </domain>
	I0814 16:10:07.009230   21883 main.go:141] libmachine: (addons-521895) 
	I0814 16:10:07.014772   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:37:48:6c in network default
	I0814 16:10:07.015343   21883 main.go:141] libmachine: (addons-521895) Ensuring networks are active...
	I0814 16:10:07.015368   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:07.015996   21883 main.go:141] libmachine: (addons-521895) Ensuring network default is active
	I0814 16:10:07.016257   21883 main.go:141] libmachine: (addons-521895) Ensuring network mk-addons-521895 is active
	I0814 16:10:07.016769   21883 main.go:141] libmachine: (addons-521895) Getting domain xml...
	I0814 16:10:07.017354   21883 main.go:141] libmachine: (addons-521895) Creating domain...
	I0814 16:10:08.505220   21883 main.go:141] libmachine: (addons-521895) Waiting to get IP...
	I0814 16:10:08.505999   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:08.506400   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:08.506468   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:08.506393   21904 retry.go:31] will retry after 213.210861ms: waiting for machine to come up
	I0814 16:10:08.720879   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:08.721362   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:08.721392   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:08.721320   21904 retry.go:31] will retry after 336.947709ms: waiting for machine to come up
	I0814 16:10:09.059913   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:09.060313   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:09.060336   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:09.060276   21904 retry.go:31] will retry after 460.065602ms: waiting for machine to come up
	I0814 16:10:09.522017   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:09.522500   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:09.522521   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:09.522458   21904 retry.go:31] will retry after 501.941374ms: waiting for machine to come up
	I0814 16:10:10.026142   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:10.026609   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:10.026636   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:10.026543   21904 retry.go:31] will retry after 597.530335ms: waiting for machine to come up
	I0814 16:10:10.625427   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:10.625850   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:10.625883   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:10.625803   21904 retry.go:31] will retry after 663.235732ms: waiting for machine to come up
	I0814 16:10:11.290110   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:11.290474   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:11.290503   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:11.290436   21904 retry.go:31] will retry after 724.896752ms: waiting for machine to come up
	I0814 16:10:12.017557   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:12.017965   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:12.018000   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:12.017910   21904 retry.go:31] will retry after 1.368272068s: waiting for machine to come up
	I0814 16:10:13.388301   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:13.388796   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:13.388822   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:13.388762   21904 retry.go:31] will retry after 1.65786077s: waiting for machine to come up
	I0814 16:10:15.048569   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:15.048973   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:15.048995   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:15.048927   21904 retry.go:31] will retry after 1.882924604s: waiting for machine to come up
	I0814 16:10:16.933623   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:16.934070   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:16.934096   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:16.934015   21904 retry.go:31] will retry after 2.299175394s: waiting for machine to come up
	I0814 16:10:19.236440   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:19.236924   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:19.236953   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:19.236889   21904 retry.go:31] will retry after 2.528572299s: waiting for machine to come up
	I0814 16:10:21.766926   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:21.767229   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:21.767249   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:21.767188   21904 retry.go:31] will retry after 3.003549239s: waiting for machine to come up
	I0814 16:10:24.774309   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:24.774732   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:24.774754   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:24.774697   21904 retry.go:31] will retry after 3.710828731s: waiting for machine to come up
	I0814 16:10:28.488500   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.488945   21883 main.go:141] libmachine: (addons-521895) Found IP for machine: 192.168.39.170
	I0814 16:10:28.488968   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has current primary IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.488974   21883 main.go:141] libmachine: (addons-521895) Reserving static IP address...
	I0814 16:10:28.489472   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find host DHCP lease matching {name: "addons-521895", mac: "52:54:00:8a:83:8f", ip: "192.168.39.170"} in network mk-addons-521895
	I0814 16:10:28.558975   21883 main.go:141] libmachine: (addons-521895) DBG | Getting to WaitForSSH function...
	I0814 16:10:28.559008   21883 main.go:141] libmachine: (addons-521895) Reserved static IP address: 192.168.39.170
	I0814 16:10:28.559021   21883 main.go:141] libmachine: (addons-521895) Waiting for SSH to be available...
	I0814 16:10:28.561385   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.561823   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:28.561858   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.562054   21883 main.go:141] libmachine: (addons-521895) DBG | Using SSH client type: external
	I0814 16:10:28.562084   21883 main.go:141] libmachine: (addons-521895) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa (-rw-------)
	I0814 16:10:28.562128   21883 main.go:141] libmachine: (addons-521895) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:10:28.562145   21883 main.go:141] libmachine: (addons-521895) DBG | About to run SSH command:
	I0814 16:10:28.562160   21883 main.go:141] libmachine: (addons-521895) DBG | exit 0
	I0814 16:10:28.691223   21883 main.go:141] libmachine: (addons-521895) DBG | SSH cmd err, output: <nil>: 
	I0814 16:10:28.691524   21883 main.go:141] libmachine: (addons-521895) KVM machine creation complete!
	I0814 16:10:28.691862   21883 main.go:141] libmachine: (addons-521895) Calling .GetConfigRaw
	I0814 16:10:28.692374   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:28.692548   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:28.692689   21883 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 16:10:28.692700   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:28.694191   21883 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 16:10:28.694205   21883 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 16:10:28.694210   21883 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 16:10:28.694216   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:28.696636   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.697107   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:28.697131   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.697278   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:28.697438   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.697555   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.697701   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:28.697892   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:28.698064   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:28.698078   21883 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 16:10:28.794350   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:10:28.794371   21883 main.go:141] libmachine: Detecting the provisioner...
	I0814 16:10:28.794379   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:28.796898   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.797259   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:28.797279   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.797414   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:28.797597   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.797746   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.797896   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:28.798063   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:28.798236   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:28.798249   21883 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 16:10:28.895484   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 16:10:28.895582   21883 main.go:141] libmachine: found compatible host: buildroot
	I0814 16:10:28.895598   21883 main.go:141] libmachine: Provisioning with buildroot...
	I0814 16:10:28.895608   21883 main.go:141] libmachine: (addons-521895) Calling .GetMachineName
	I0814 16:10:28.895883   21883 buildroot.go:166] provisioning hostname "addons-521895"
	I0814 16:10:28.895906   21883 main.go:141] libmachine: (addons-521895) Calling .GetMachineName
	I0814 16:10:28.896099   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:28.898660   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.899046   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:28.899062   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.899194   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:28.899373   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.899502   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.899626   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:28.899764   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:28.899931   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:28.899944   21883 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-521895 && echo "addons-521895" | sudo tee /etc/hostname
	I0814 16:10:29.012661   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-521895
	
	I0814 16:10:29.012691   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.015369   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.015724   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.015757   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.015848   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.016037   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.016205   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.016336   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.016497   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:29.016666   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:29.016680   21883 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-521895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-521895/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-521895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:10:29.123805   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:10:29.123837   21883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:10:29.123926   21883 buildroot.go:174] setting up certificates
	I0814 16:10:29.123944   21883 provision.go:84] configureAuth start
	I0814 16:10:29.123964   21883 main.go:141] libmachine: (addons-521895) Calling .GetMachineName
	I0814 16:10:29.124300   21883 main.go:141] libmachine: (addons-521895) Calling .GetIP
	I0814 16:10:29.127098   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.127615   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.127644   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.127840   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.130023   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.130326   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.130353   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.130502   21883 provision.go:143] copyHostCerts
	I0814 16:10:29.130655   21883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:10:29.130822   21883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:10:29.130920   21883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:10:29.130995   21883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.addons-521895 san=[127.0.0.1 192.168.39.170 addons-521895 localhost minikube]
	I0814 16:10:29.392495   21883 provision.go:177] copyRemoteCerts
	I0814 16:10:29.392547   21883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:10:29.392568   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.394916   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.395267   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.395292   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.395450   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.395651   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.395788   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.395922   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:29.476686   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:10:29.498787   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:10:29.520616   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 16:10:29.543128   21883 provision.go:87] duration metric: took 419.159107ms to configureAuth
	I0814 16:10:29.543167   21883 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:10:29.543361   21883 config.go:182] Loaded profile config "addons-521895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:29.543448   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.546123   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.546576   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.546602   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.546821   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.547012   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.547135   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.547291   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.547476   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:29.547639   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:29.547658   21883 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:10:29.802009   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:10:29.802042   21883 main.go:141] libmachine: Checking connection to Docker...
	I0814 16:10:29.802056   21883 main.go:141] libmachine: (addons-521895) Calling .GetURL
	I0814 16:10:29.803354   21883 main.go:141] libmachine: (addons-521895) DBG | Using libvirt version 6000000
	I0814 16:10:29.805409   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.805666   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.805690   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.805847   21883 main.go:141] libmachine: Docker is up and running!
	I0814 16:10:29.805869   21883 main.go:141] libmachine: Reticulating splines...
	I0814 16:10:29.805879   21883 client.go:171] duration metric: took 23.632061619s to LocalClient.Create
	I0814 16:10:29.805908   21883 start.go:167] duration metric: took 23.632142197s to libmachine.API.Create "addons-521895"
	I0814 16:10:29.805929   21883 start.go:293] postStartSetup for "addons-521895" (driver="kvm2")
	I0814 16:10:29.805942   21883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:10:29.805963   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:29.806237   21883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:10:29.806261   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.808336   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.808653   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.808679   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.808818   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.808991   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.809141   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.809279   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:29.889298   21883 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:10:29.893436   21883 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:10:29.893461   21883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:10:29.893521   21883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:10:29.893549   21883 start.go:296] duration metric: took 87.611334ms for postStartSetup
	I0814 16:10:29.893578   21883 main.go:141] libmachine: (addons-521895) Calling .GetConfigRaw
	I0814 16:10:29.894081   21883 main.go:141] libmachine: (addons-521895) Calling .GetIP
	I0814 16:10:29.896884   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.897150   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.897178   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.897446   21883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/config.json ...
	I0814 16:10:29.897619   21883 start.go:128] duration metric: took 23.741722706s to createHost
	I0814 16:10:29.897647   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.899839   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.900131   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.900176   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.900275   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.900448   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.900602   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.900715   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.900889   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:29.901114   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:29.901129   21883 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:10:29.999778   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723651829.977425271
	
	I0814 16:10:29.999799   21883 fix.go:216] guest clock: 1723651829.977425271
	I0814 16:10:29.999807   21883 fix.go:229] Guest: 2024-08-14 16:10:29.977425271 +0000 UTC Remote: 2024-08-14 16:10:29.89763113 +0000 UTC m=+23.840249664 (delta=79.794141ms)
	I0814 16:10:29.999826   21883 fix.go:200] guest clock delta is within tolerance: 79.794141ms
	I0814 16:10:29.999831   21883 start.go:83] releasing machines lock for "addons-521895", held for 23.844024817s
	I0814 16:10:29.999849   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:30.000127   21883 main.go:141] libmachine: (addons-521895) Calling .GetIP
	I0814 16:10:30.002906   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.003230   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:30.003261   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.003381   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:30.003954   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:30.004196   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:30.004266   21883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:10:30.004312   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:30.004428   21883 ssh_runner.go:195] Run: cat /version.json
	I0814 16:10:30.004447   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:30.007808   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.007966   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.008197   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:30.008220   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.008408   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:30.008534   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:30.008561   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.008571   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:30.008693   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:30.008799   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:30.008872   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:30.009030   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:30.009052   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:30.009195   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:30.120216   21883 ssh_runner.go:195] Run: systemctl --version
	I0814 16:10:30.125753   21883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:10:30.280905   21883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:10:30.286714   21883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:10:30.286775   21883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:10:30.302067   21883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 16:10:30.302090   21883 start.go:495] detecting cgroup driver to use...
	I0814 16:10:30.302142   21883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:10:30.317711   21883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:10:30.330349   21883 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:10:30.330394   21883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:10:30.343081   21883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:10:30.355658   21883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:10:30.466030   21883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:10:30.628663   21883 docker.go:233] disabling docker service ...
	I0814 16:10:30.628743   21883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:10:30.642641   21883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:10:30.654798   21883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:10:30.760830   21883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:10:30.869341   21883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:10:30.883051   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:10:30.900734   21883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:10:30.900790   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.910788   21883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:10:30.910846   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.921453   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.931882   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.941563   21883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:10:30.951596   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.961498   21883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.977531   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
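Taken together, the sed invocations above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a sketch reconstructed only from the commands shown in this log, not a verbatim dump of the file, and the ordering of keys is approximate:

  pause_image = "registry.k8s.io/pause:3.10"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]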
	I0814 16:10:30.987513   21883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:10:30.996778   21883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 16:10:30.996837   21883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 16:10:31.009574   21883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:10:31.018737   21883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:31.131242   21883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 16:10:31.265302   21883 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:10:31.265418   21883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:10:31.269437   21883 start.go:563] Will wait 60s for crictl version
	I0814 16:10:31.269505   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:10:31.272777   21883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:10:31.309296   21883 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:10:31.309426   21883 ssh_runner.go:195] Run: crio --version
	I0814 16:10:31.338286   21883 ssh_runner.go:195] Run: crio --version
	I0814 16:10:31.364654   21883 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:10:31.365724   21883 main.go:141] libmachine: (addons-521895) Calling .GetIP
	I0814 16:10:31.368040   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:31.368513   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:31.368539   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:31.368794   21883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:10:31.372348   21883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:10:31.383506   21883 kubeadm.go:883] updating cluster {Name:addons-521895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 16:10:31.383598   21883 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:10:31.383687   21883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:10:31.412955   21883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 16:10:31.413014   21883 ssh_runner.go:195] Run: which lz4
	I0814 16:10:31.416518   21883 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 16:10:31.420240   21883 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 16:10:31.420268   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 16:10:32.533117   21883 crio.go:462] duration metric: took 1.11663254s to copy over tarball
	I0814 16:10:32.533201   21883 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 16:10:34.622945   21883 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.089710145s)
	I0814 16:10:34.622979   21883 crio.go:469] duration metric: took 2.089833263s to extract the tarball
	I0814 16:10:34.622987   21883 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 16:10:34.659111   21883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:10:34.712778   21883 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:10:34.712802   21883 cache_images.go:84] Images are preloaded, skipping loading
	I0814 16:10:34.712810   21883 kubeadm.go:934] updating node { 192.168.39.170 8443 v1.31.0 crio true true} ...
	I0814 16:10:34.712902   21883 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-521895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
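The unit fragment above appears to be what is later written to the node as the kubelet systemd drop-in (the log at 16:10:34 shows a 313-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Assuming SSH access to the VM, the rendered flags can be checked directly, for example:

  sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf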
	I0814 16:10:34.712964   21883 ssh_runner.go:195] Run: crio config
	I0814 16:10:34.764521   21883 cni.go:84] Creating CNI manager for ""
	I0814 16:10:34.764539   21883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 16:10:34.764550   21883 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 16:10:34.764570   21883 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-521895 NodeName:addons-521895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 16:10:34.764704   21883 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-521895"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 16:10:34.764758   21883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:10:34.774903   21883 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 16:10:34.774960   21883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 16:10:34.784387   21883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 16:10:34.799459   21883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:10:34.814188   21883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0814 16:10:34.829136   21883 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0814 16:10:34.832558   21883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:10:34.843692   21883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:34.962207   21883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:10:34.977962   21883 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895 for IP: 192.168.39.170
	I0814 16:10:34.977985   21883 certs.go:194] generating shared ca certs ...
	I0814 16:10:34.978000   21883 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:34.978138   21883 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:10:35.198673   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt ...
	I0814 16:10:35.198703   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt: {Name:mk62824be8e10bd263c0dd5720a3117b18ac9879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.198915   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key ...
	I0814 16:10:35.198931   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key: {Name:mk574395626194e124be99961a17bf1bc61653b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.199059   21883 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:10:35.305488   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt ...
	I0814 16:10:35.305518   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt: {Name:mk3bb83a0fb2ed49a81ef6a63fce51ca58051613 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.305702   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key ...
	I0814 16:10:35.305717   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key: {Name:mk32534c9350755c75499694cb013600e4c1ce82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.305811   21883 certs.go:256] generating profile certs ...
	I0814 16:10:35.305876   21883 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.key
	I0814 16:10:35.305895   21883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt with IP's: []
	I0814 16:10:35.442627   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt ...
	I0814 16:10:35.442657   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: {Name:mkbab4ed7e6d5971126674d442590fd6728b9eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.442837   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.key ...
	I0814 16:10:35.442851   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.key: {Name:mk1782a3916e9e3308a4f8c0920aef28bba5d828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.442977   21883 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key.65557067
	I0814 16:10:35.442999   21883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt.65557067 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170]
	I0814 16:10:35.633944   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt.65557067 ...
	I0814 16:10:35.633973   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt.65557067: {Name:mk9c9d65275d11733d48a9bb792c3edff9dbb01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.634140   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key.65557067 ...
	I0814 16:10:35.634157   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key.65557067: {Name:mka72a6e2cc95c34d1b74708936e1ed30a52196a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.634250   21883 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt.65557067 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt
	I0814 16:10:35.634341   21883 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key.65557067 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key
	I0814 16:10:35.634421   21883 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.key
	I0814 16:10:35.634446   21883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.crt with IP's: []
	I0814 16:10:35.804439   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.crt ...
	I0814 16:10:35.804465   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.crt: {Name:mk9edbd5c2ee2861498ab8a21bdc910e43daaa9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.804622   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.key ...
	I0814 16:10:35.804633   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.key: {Name:mk495301dade9c4e996c4c2a8a360d9a8e9b4707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.804786   21883 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:10:35.804817   21883 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:10:35.804840   21883 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:10:35.804865   21883 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:10:35.805386   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:10:35.828315   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:10:35.849948   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:10:35.870935   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:10:35.891433   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0814 16:10:35.912401   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:10:35.933755   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:10:35.955710   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:10:35.977404   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:10:35.998289   21883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 16:10:36.013514   21883 ssh_runner.go:195] Run: openssl version
	I0814 16:10:36.018855   21883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:10:36.028583   21883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:36.032680   21883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:36.032726   21883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:36.038176   21883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:10:36.047985   21883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:10:36.051527   21883 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:10:36.051585   21883 kubeadm.go:392] StartCluster: {Name:addons-521895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:10:36.051676   21883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 16:10:36.051717   21883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 16:10:36.091799   21883 cri.go:89] found id: ""
	I0814 16:10:36.091878   21883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 16:10:36.101280   21883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 16:10:36.110142   21883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 16:10:36.118798   21883 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 16:10:36.118812   21883 kubeadm.go:157] found existing configuration files:
	
	I0814 16:10:36.118853   21883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 16:10:36.127079   21883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 16:10:36.127124   21883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 16:10:36.135493   21883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 16:10:36.143457   21883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 16:10:36.143497   21883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 16:10:36.151866   21883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 16:10:36.160052   21883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 16:10:36.160088   21883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 16:10:36.168543   21883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 16:10:36.176858   21883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 16:10:36.176916   21883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 16:10:36.185846   21883 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
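When reproducing this init step outside the test harness, the config that kubeadm consumes here can be inspected (and, assuming the bundled v1.31.0 kubeadm supports the validate subcommand, checked) over ssh; the commands below are an illustrative sketch using this run's profile, not part of the captured output:

  $ out/minikube-linux-amd64 -p addons-521895 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"
  $ out/minikube-linux-amd64 -p addons-521895 ssh "sudo env PATH=/var/lib/minikube/binaries/v1.31.0:\$PATH kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml"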
	I0814 16:10:36.240444   21883 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 16:10:36.240571   21883 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 16:10:36.339722   21883 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 16:10:36.339836   21883 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 16:10:36.339923   21883 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 16:10:36.350867   21883 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 16:10:36.443055   21883 out.go:204]   - Generating certificates and keys ...
	I0814 16:10:36.443181   21883 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 16:10:36.443276   21883 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 16:10:36.547559   21883 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 16:10:36.657343   21883 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 16:10:36.740110   21883 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 16:10:37.022509   21883 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 16:10:37.246483   21883 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 16:10:37.246671   21883 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-521895 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0814 16:10:37.323623   21883 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 16:10:37.323768   21883 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-521895 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0814 16:10:37.568386   21883 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 16:10:37.679115   21883 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 16:10:37.783745   21883 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 16:10:37.783817   21883 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 16:10:37.867783   21883 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 16:10:38.225454   21883 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 16:10:38.362436   21883 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 16:10:38.537998   21883 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 16:10:38.658136   21883 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 16:10:38.658641   21883 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 16:10:38.661071   21883 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 16:10:38.663034   21883 out.go:204]   - Booting up control plane ...
	I0814 16:10:38.663144   21883 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 16:10:38.663227   21883 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 16:10:38.663301   21883 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 16:10:38.681360   21883 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 16:10:38.688368   21883 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 16:10:38.688439   21883 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 16:10:38.815510   21883 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 16:10:38.815656   21883 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 16:10:39.816545   21883 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001739927s
	I0814 16:10:39.816650   21883 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 16:10:44.815025   21883 kubeadm.go:310] [api-check] The API server is healthy after 5.001358779s
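The two health probes logged above ([kubelet-check] against 127.0.0.1:10248 and [api-check] against the API server) can be exercised directly when triaging a slow control-plane boot; a sketch run from inside the node, assuming the endpoints this log reports:

  $ curl -s http://127.0.0.1:10248/healthz
  $ curl -sk https://192.168.39.170:8443/livez?verbose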
	I0814 16:10:44.827551   21883 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 16:10:44.845146   21883 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 16:10:44.874557   21883 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 16:10:44.874750   21883 kubeadm.go:310] [mark-control-plane] Marking the node addons-521895 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 16:10:44.889264   21883 kubeadm.go:310] [bootstrap-token] Using token: vwipfe.56fv3zfcv1u9rrs2
	I0814 16:10:44.890619   21883 out.go:204]   - Configuring RBAC rules ...
	I0814 16:10:44.890770   21883 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 16:10:44.897257   21883 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 16:10:44.912935   21883 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 16:10:44.917697   21883 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 16:10:44.924877   21883 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 16:10:44.929196   21883 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 16:10:45.223610   21883 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 16:10:45.713421   21883 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 16:10:46.221131   21883 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 16:10:46.221918   21883 kubeadm.go:310] 
	I0814 16:10:46.222019   21883 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 16:10:46.222055   21883 kubeadm.go:310] 
	I0814 16:10:46.222147   21883 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 16:10:46.222162   21883 kubeadm.go:310] 
	I0814 16:10:46.222198   21883 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 16:10:46.222278   21883 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 16:10:46.222367   21883 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 16:10:46.222377   21883 kubeadm.go:310] 
	I0814 16:10:46.222440   21883 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 16:10:46.222450   21883 kubeadm.go:310] 
	I0814 16:10:46.222516   21883 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 16:10:46.222526   21883 kubeadm.go:310] 
	I0814 16:10:46.222604   21883 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 16:10:46.222739   21883 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 16:10:46.222834   21883 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 16:10:46.222842   21883 kubeadm.go:310] 
	I0814 16:10:46.222946   21883 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 16:10:46.223042   21883 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 16:10:46.223068   21883 kubeadm.go:310] 
	I0814 16:10:46.223176   21883 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vwipfe.56fv3zfcv1u9rrs2 \
	I0814 16:10:46.223303   21883 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 16:10:46.223354   21883 kubeadm.go:310] 	--control-plane 
	I0814 16:10:46.223364   21883 kubeadm.go:310] 
	I0814 16:10:46.223466   21883 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 16:10:46.223475   21883 kubeadm.go:310] 
	I0814 16:10:46.223590   21883 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vwipfe.56fv3zfcv1u9rrs2 \
	I0814 16:10:46.223744   21883 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
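The sha256 value passed as --discovery-token-ca-cert-hash above is the hash of the cluster CA public key; it can be recomputed from the certificate directory used in this log (/var/lib/minikube/certs) with the standard openssl pipeline, shown here as an illustrative check rather than something the test executes:

  $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'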
	I0814 16:10:46.224384   21883 kubeadm.go:310] W0814 16:10:36.221186     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:10:46.224764   21883 kubeadm.go:310] W0814 16:10:36.222383     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:10:46.224862   21883 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
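The two v1beta3 deprecation warnings above come from the kubeadm.yaml that minikube generates; kubeadm's own suggestion can be applied on the node if a clean config is wanted, e.g. (illustrative; the output path is made up for the example):

  $ sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml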
	I0814 16:10:46.224885   21883 cni.go:84] Creating CNI manager for ""
	I0814 16:10:46.224894   21883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 16:10:46.226761   21883 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 16:10:46.228163   21883 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 16:10:46.240120   21883 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
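The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration; the log records only its size, but a bridge conflist of roughly this shape (an illustrative sketch, with 10.244.0.0/16 assumed as the pod subnet) is what the kubelet ends up loading:

  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }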
	I0814 16:10:46.257403   21883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 16:10:46.257490   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:46.257490   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-521895 minikube.k8s.io/updated_at=2024_08_14T16_10_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=addons-521895 minikube.k8s.io/primary=true
	I0814 16:10:46.393678   21883 ops.go:34] apiserver oom_adj: -16
	I0814 16:10:46.393717   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:46.893842   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:47.394570   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:47.894347   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:48.394498   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:48.894638   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:49.393849   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:49.894009   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:50.394030   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:50.483045   21883 kubeadm.go:1113] duration metric: took 4.225619558s to wait for elevateKubeSystemPrivileges
	I0814 16:10:50.483081   21883 kubeadm.go:394] duration metric: took 14.431497273s to StartCluster
	I0814 16:10:50.483103   21883 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:50.483264   21883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:10:50.483795   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:50.484004   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 16:10:50.484043   21883 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:10:50.484092   21883 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
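The toEnable map above is the addon set this profile requests at start; the same state can be inspected or adjusted per profile with the addons subcommand, e.g. (illustrative, not part of the captured run):

  $ out/minikube-linux-amd64 -p addons-521895 addons list
  $ out/minikube-linux-amd64 -p addons-521895 addons enable metrics-server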
	I0814 16:10:50.484188   21883 addons.go:69] Setting yakd=true in profile "addons-521895"
	I0814 16:10:50.484201   21883 addons.go:69] Setting helm-tiller=true in profile "addons-521895"
	I0814 16:10:50.484211   21883 addons.go:69] Setting gcp-auth=true in profile "addons-521895"
	I0814 16:10:50.484194   21883 addons.go:69] Setting inspektor-gadget=true in profile "addons-521895"
	I0814 16:10:50.484231   21883 addons.go:69] Setting ingress=true in profile "addons-521895"
	I0814 16:10:50.484240   21883 addons.go:234] Setting addon inspektor-gadget=true in "addons-521895"
	I0814 16:10:50.484244   21883 mustload.go:65] Loading cluster: addons-521895
	I0814 16:10:50.484247   21883 addons.go:234] Setting addon ingress=true in "addons-521895"
	I0814 16:10:50.484247   21883 addons.go:69] Setting volcano=true in profile "addons-521895"
	I0814 16:10:50.484248   21883 addons.go:69] Setting storage-provisioner=true in profile "addons-521895"
	I0814 16:10:50.484260   21883 config.go:182] Loaded profile config "addons-521895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:50.484270   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484274   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484288   21883 addons.go:234] Setting addon volcano=true in "addons-521895"
	I0814 16:10:50.484305   21883 addons.go:234] Setting addon storage-provisioner=true in "addons-521895"
	I0814 16:10:50.484306   21883 addons.go:69] Setting ingress-dns=true in profile "addons-521895"
	I0814 16:10:50.484321   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484337   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484350   21883 addons.go:234] Setting addon ingress-dns=true in "addons-521895"
	I0814 16:10:50.484384   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484223   21883 addons.go:234] Setting addon helm-tiller=true in "addons-521895"
	I0814 16:10:50.484425   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484448   21883 config.go:182] Loaded profile config "addons-521895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:50.484733   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484752   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484754   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484755   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484762   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484772   21883 addons.go:69] Setting metrics-server=true in profile "addons-521895"
	I0814 16:10:50.484781   21883 addons.go:69] Setting cloud-spanner=true in profile "addons-521895"
	I0814 16:10:50.484786   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484794   21883 addons.go:234] Setting addon metrics-server=true in "addons-521895"
	I0814 16:10:50.484800   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484804   21883 addons.go:234] Setting addon cloud-spanner=true in "addons-521895"
	I0814 16:10:50.484813   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484814   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484821   21883 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-521895"
	I0814 16:10:50.484837   21883 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-521895"
	I0814 16:10:50.484851   21883 addons.go:69] Setting volumesnapshots=true in profile "addons-521895"
	I0814 16:10:50.484740   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484857   21883 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-521895"
	I0814 16:10:50.484857   21883 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-521895"
	I0814 16:10:50.484867   21883 addons.go:69] Setting default-storageclass=true in profile "addons-521895"
	I0814 16:10:50.484871   21883 addons.go:234] Setting addon volumesnapshots=true in "addons-521895"
	I0814 16:10:50.484223   21883 addons.go:234] Setting addon yakd=true in "addons-521895"
	I0814 16:10:50.484872   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484883   21883 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-521895"
	I0814 16:10:50.484883   21883 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-521895"
	I0814 16:10:50.484893   21883 addons.go:69] Setting registry=true in profile "addons-521895"
	I0814 16:10:50.484903   21883 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-521895"
	I0814 16:10:50.484914   21883 addons.go:234] Setting addon registry=true in "addons-521895"
	I0814 16:10:50.484775   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484998   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485059   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485145   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485188   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485260   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485468   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485488   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485571   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485632   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485641   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485633   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485695   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485707   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485723   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485856   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485868   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485983   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485996   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485999   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.486021   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.486054   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.486083   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.486169   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.486532   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.486549   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.486788   21883 out.go:177] * Verifying Kubernetes components...
	I0814 16:10:50.492597   21883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:50.507457   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42397
	I0814 16:10:50.507473   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0814 16:10:50.507689   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0814 16:10:50.507955   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.509148   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.509178   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.509727   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.509790   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.510105   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.510310   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0814 16:10:50.510885   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.510920   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.511008   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.511464   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.511487   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.511541   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.511824   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.511828   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.512386   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.512435   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.512829   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.512848   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.513186   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.513218   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.514855   21883 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-521895"
	I0814 16:10:50.514898   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.515257   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.515305   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.515499   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.519922   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0814 16:10:50.520432   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.520447   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.520474   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.520493   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.528014   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.528206   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0814 16:10:50.528327   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0814 16:10:50.528432   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I0814 16:10:50.543910   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.550974   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.551016   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0814 16:10:50.551108   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.551130   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.551148   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.551846   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.551865   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.551947   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.552633   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.552671   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.553027   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.553086   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.553102   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.553142   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43657
	I0814 16:10:50.553331   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.553822   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.553861   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.554438   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35569
	I0814 16:10:50.554517   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.554582   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.554593   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.554568   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.555019   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.555111   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.555141   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.555403   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.555423   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.555795   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.555839   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.555920   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.555942   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.556059   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.556411   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.556435   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.556546   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.557982   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.558309   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.558384   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.558938   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.558978   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.559615   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.560037   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.560726   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.560766   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.560981   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.561365   21883 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0814 16:10:50.561413   21883 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 16:10:50.562833   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0814 16:10:50.562861   21883 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0814 16:10:50.562871   21883 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0814 16:10:50.563255   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.572515   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.572669   21883 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:10:50.572680   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 16:10:50.572698   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.572779   21883 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 16:10:50.572787   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0814 16:10:50.572798   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.572551   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.572843   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.572917   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45805
	I0814 16:10:50.573026   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
	I0814 16:10:50.573152   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.573422   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.574217   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.574313   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.574859   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.574155   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.574974   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.575499   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.575926   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.576795   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.576835   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.576977   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.577357   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.577395   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.577411   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.577443   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.577654   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.577879   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.577945   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.577961   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.578150   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.578177   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.578378   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.578668   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.578846   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.579412   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.579436   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.579586   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0814 16:10:50.579821   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.580035   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.580384   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.580431   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.580436   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.580450   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.580799   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0814 16:10:50.580907   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.581393   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.581426   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.581628   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0814 16:10:50.581639   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.582089   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.582103   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.584259   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0814 16:10:50.584290   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0814 16:10:50.584382   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.584413   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.584939   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.584978   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.585242   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.585254   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.585318   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.585383   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.585582   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.585860   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.585877   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.586000   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.586010   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.586408   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.586436   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.586616   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.587410   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.587634   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.587664   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.587971   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.588007   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.593815   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0814 16:10:50.594354   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.594977   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.594994   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.595480   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.595720   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.597171   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0814 16:10:50.597574   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.597676   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.597739   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41671
	I0814 16:10:50.598374   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.598399   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.598527   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.598742   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.599179   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.599196   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.599221   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.599613   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.599770   21883 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0814 16:10:50.600228   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.600268   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.601179   21883 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0814 16:10:50.601196   21883 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0814 16:10:50.601213   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.601440   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0814 16:10:50.601985   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.602767   21883 addons.go:234] Setting addon default-storageclass=true in "addons-521895"
	I0814 16:10:50.602808   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.603178   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.603195   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.603227   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.603270   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.603530   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.603698   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.604137   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.604745   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.604777   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.604936   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.605091   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.605219   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.605336   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.612706   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.614440   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0814 16:10:50.614902   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.615495   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.615520   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.615729   21883 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0814 16:10:50.616001   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.616218   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.617077   21883 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0814 16:10:50.617101   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0814 16:10:50.617120   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.620338   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.620758   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
	I0814 16:10:50.621049   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.621193   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.621932   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.621972   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.621992   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.622108   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.622339   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.622362   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.622386   21883 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0814 16:10:50.622590   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.622739   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.623561   21883 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 16:10:50.623578   21883 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 16:10:50.623594   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.623602   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.624157   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.626613   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.627272   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.627411   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0814 16:10:50.627521   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.627537   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.627617   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.628213   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.628316   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.628362   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0814 16:10:50.628915   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.628935   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.629289   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.629466   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.629636   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.629804   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.629979   21883 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0814 16:10:50.630278   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I0814 16:10:50.630409   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.630924   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.630944   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.631001   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.631001   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.631253   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I0814 16:10:50.631750   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.631769   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.632122   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.632208   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.632360   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.632420   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.632547   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.632971   21883 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:50.633097   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.633113   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.632972   21883 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0814 16:10:50.633686   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.634327   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.635294   21883 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:50.635402   21883 out.go:177]   - Using image docker.io/busybox:stable
	I0814 16:10:50.635606   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.635645   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.635883   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0814 16:10:50.636293   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.636546   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0814 16:10:50.636727   21883 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 16:10:50.636747   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0814 16:10:50.636763   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.637006   21883 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 16:10:50.637020   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0814 16:10:50.637037   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.637020   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46343
	I0814 16:10:50.637895   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.638390   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.638405   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.638500   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.638519   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.638970   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.639011   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34627
	I0814 16:10:50.639251   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.639281   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.639580   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.639653   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.639960   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0814 16:10:50.640200   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.640222   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.640552   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.640732   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.641355   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.641889   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0814 16:10:50.642132   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.642157   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.642421   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.642604   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.642838   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.643062   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.643558   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.644041   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0814 16:10:50.644211   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.644467   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.644657   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.644676   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.644919   21883 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0814 16:10:50.644962   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.645017   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.645134   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.645169   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0814 16:10:50.645819   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0814 16:10:50.645907   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:50.645916   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:50.646075   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:50.646080   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.646087   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:50.646105   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:50.646112   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:50.646195   21883 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0814 16:10:50.646209   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0814 16:10:50.646226   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.646295   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:50.646318   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:50.646326   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	W0814 16:10:50.646394   21883 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0814 16:10:50.646505   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.646832   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0814 16:10:50.647378   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.647926   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.647938   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.648088   21883 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0814 16:10:50.648508   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.649000   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.649086   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.649108   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.649129   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0814 16:10:50.649279   21883 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 16:10:50.649289   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0814 16:10:50.649301   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.649477   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.650143   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.650945   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.651285   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.651351   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0814 16:10:50.651698   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.651782   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.651997   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.652174   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.652327   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.652462   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.652543   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.653163   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.653566   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.653826   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0814 16:10:50.653831   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0814 16:10:50.653938   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.654098   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.654169   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.654322   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.654428   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.654506   21883 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0814 16:10:50.654570   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.655198   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0814 16:10:50.655205   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0814 16:10:50.655214   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0814 16:10:50.655215   21883 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0814 16:10:50.655229   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.655229   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.657089   21883 out.go:177]   - Using image docker.io/registry:2.8.3
	I0814 16:10:50.658199   21883 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0814 16:10:50.658218   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0814 16:10:50.658231   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.658234   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.659298   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.659354   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.659494   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.659681   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.659868   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.660017   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.660439   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.661801   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.661801   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.661832   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.661860   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.662016   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.662312   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.662328   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.662353   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.662512   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.662524   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.662857   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.662988   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.663104   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.665645   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40161
	I0814 16:10:50.665973   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.666359   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.666370   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.666722   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.666885   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.668366   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.668555   21883 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 16:10:50.668567   21883 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 16:10:50.668578   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	W0814 16:10:50.669633   21883 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34528->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.669653   21883 retry.go:31] will retry after 164.2543ms: ssh: handshake failed: read tcp 192.168.39.1:34528->192.168.39.170:22: read: connection reset by peer
	W0814 16:10:50.669716   21883 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34538->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.669726   21883 retry.go:31] will retry after 252.601659ms: ssh: handshake failed: read tcp 192.168.39.1:34538->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.670963   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.671292   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.671309   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.671519   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.671674   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.671883   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.672013   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	W0814 16:10:50.672492   21883 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34542->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.672507   21883 retry.go:31] will retry after 232.561584ms: ssh: handshake failed: read tcp 192.168.39.1:34542->192.168.39.170:22: read: connection reset by peer
	W0814 16:10:50.834593   21883 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34546->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.834622   21883 retry.go:31] will retry after 229.630872ms: ssh: handshake failed: read tcp 192.168.39.1:34546->192.168.39.170:22: read: connection reset by peer
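	(Editor's note: the handshake failures above are transient — several addon installers dial the node's SSH port concurrently right after the VM comes up, the first connections are reset, and sshutil backs off and retries, as the `retry.go:31] will retry after ...` lines show. Below is a minimal Go sketch of that retry-with-backoff pattern. It is illustrative only, not minikube's actual sshutil/retry implementation; the attempt count and delays are assumptions, and the dialed address is simply the node address taken from the log above.)

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// retry runs op up to maxAttempts times, sleeping a little longer after each
// failure, and returns the last error if every attempt fails. This mirrors the
// "will retry after ..." pattern in the log; bounds and delays are illustrative.
func retry(maxAttempts int, baseDelay time.Duration, op func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		if attempt < maxAttempts {
			delay := time.Duration(attempt) * baseDelay
			fmt.Printf("attempt %d failed (%v), retrying after %s\n", attempt, err, delay)
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("all %d attempts failed: %w", maxAttempts, err)
}

func main() {
	// Example: re-dial an SSH port that may reset the first few connections
	// while the guest's sshd is still settling (address from the log above).
	err := retry(5, 200*time.Millisecond, func() error {
		conn, err := net.DialTimeout("tcp", "192.168.39.170:22", 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
```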
	I0814 16:10:50.949629   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:10:51.027681   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0814 16:10:51.062834   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 16:10:51.071717   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 16:10:51.074511   21883 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 16:10:51.074534   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0814 16:10:51.076116   21883 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0814 16:10:51.076139   21883 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0814 16:10:51.081546   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 16:10:51.083581   21883 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0814 16:10:51.083599   21883 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0814 16:10:51.114075   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 16:10:51.131524   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0814 16:10:51.131554   21883 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0814 16:10:51.135778   21883 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0814 16:10:51.135797   21883 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0814 16:10:51.151994   21883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:10:51.152070   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 16:10:51.270655   21883 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0814 16:10:51.270684   21883 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0814 16:10:51.276223   21883 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 16:10:51.276243   21883 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 16:10:51.310888   21883 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0814 16:10:51.310916   21883 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0814 16:10:51.311269   21883 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0814 16:10:51.311289   21883 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0814 16:10:51.343122   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 16:10:51.347256   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0814 16:10:51.347285   21883 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0814 16:10:51.486680   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0814 16:10:51.486713   21883 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0814 16:10:51.535277   21883 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0814 16:10:51.535310   21883 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0814 16:10:51.565416   21883 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0814 16:10:51.565450   21883 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0814 16:10:51.567437   21883 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0814 16:10:51.567458   21883 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0814 16:10:51.575297   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0814 16:10:51.594357   21883 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 16:10:51.594390   21883 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 16:10:51.661706   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0814 16:10:51.661730   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0814 16:10:51.710957   21883 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0814 16:10:51.710985   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0814 16:10:51.721696   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0814 16:10:51.721725   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0814 16:10:51.763273   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0814 16:10:51.763302   21883 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0814 16:10:51.795776   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 16:10:51.823344   21883 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0814 16:10:51.823373   21883 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0814 16:10:51.852470   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0814 16:10:51.869448   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0814 16:10:51.891244   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0814 16:10:51.891278   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0814 16:10:51.953475   21883 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:51.953506   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0814 16:10:52.007299   21883 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0814 16:10:52.007355   21883 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0814 16:10:52.230751   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0814 16:10:52.230778   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0814 16:10:52.268601   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:52.349164   21883 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0814 16:10:52.349194   21883 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0814 16:10:52.514968   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0814 16:10:52.514998   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0814 16:10:52.652787   21883 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 16:10:52.652817   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0814 16:10:52.764861   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0814 16:10:52.764891   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0814 16:10:52.910399   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 16:10:53.122923   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0814 16:10:53.122945   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0814 16:10:53.351159   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0814 16:10:53.351182   21883 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0814 16:10:53.648965   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0814 16:10:53.648992   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0814 16:10:53.914056   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0814 16:10:53.914075   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0814 16:10:54.227644   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 16:10:54.227675   21883 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0814 16:10:54.677927   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 16:10:55.027482   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.077816028s)
	I0814 16:10:55.027522   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.999815031s)
	I0814 16:10:55.027539   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:55.027541   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:55.027552   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:55.027553   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:55.027952   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:55.027970   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:55.027980   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:55.027988   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:55.028060   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:55.028081   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:55.028091   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:55.028100   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:55.028107   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:55.028191   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:55.028206   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:55.028224   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:55.028319   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:55.028346   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:55.028385   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:57.649200   21883 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0814 16:10:57.649236   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:57.652736   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:57.653364   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:57.653399   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:57.653614   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:57.653842   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:57.654042   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:57.654204   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:58.256857   21883 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0814 16:10:58.317832   21883 addons.go:234] Setting addon gcp-auth=true in "addons-521895"
	I0814 16:10:58.317884   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:58.318205   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:58.318232   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:58.333959   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0814 16:10:58.334434   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:58.334926   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:58.334952   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:58.335376   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:58.335935   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:58.335968   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:58.351420   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0814 16:10:58.351818   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:58.352273   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:58.352303   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:58.352583   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:58.352752   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:58.354278   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:58.354519   21883 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0814 16:10:58.354545   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:58.357637   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:58.358080   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:58.358106   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:58.358228   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:58.358409   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:58.358548   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:58.358709   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:59.213723   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.150858257s)
	I0814 16:10:59.213775   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.213786   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.213788   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.142041941s)
	I0814 16:10:59.213827   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.213841   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.213860   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.132290307s)
	I0814 16:10:59.213896   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.213912   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.213931   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.099827462s)
	I0814 16:10:59.213973   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.214056   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.214179   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.418374051s)
	I0814 16:10:59.214205   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.214215   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.214351   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.361819374s)
	I0814 16:10:59.214367   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.214376   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.214435   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.344954677s)
	I0814 16:10:59.213982   21883 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.061965062s)
	I0814 16:10:59.214449   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.214457   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.213994   21883 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.061901768s)
	I0814 16:10:59.214824   21883 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0814 16:10:59.215399   21883 node_ready.go:35] waiting up to 6m0s for node "addons-521895" to be "Ready" ...
	I0814 16:10:59.214070   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.63875231s)
	I0814 16:10:59.215649   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.215660   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215711   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.215713   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.215728   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.215737   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.215743   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.215745   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215751   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.215761   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.215768   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.214035   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.870866076s)
	I0814 16:10:59.215991   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216008   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.216028   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.305596942s)
	I0814 16:10:59.215793   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.215807   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216051   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216060   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215814   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.215832   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216111   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216120   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216128   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215834   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216157   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216167   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216174   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215854   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216281   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216290   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216297   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215962   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.947318144s)
	W0814 16:10:59.216456   21883 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0814 16:10:59.216478   21883 retry.go:31] will retry after 156.382504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
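	(Editor's note: the failure above is an ordering race, not a broken manifest — the VolumeSnapshot CRDs and the `csi-hostpath-snapclass` VolumeSnapshotClass are applied in a single batch, and the API server has not yet established the freshly created CRDs when the class object is submitted, hence "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first". minikube simply retries, and the follow-up attempt at 16:10:59.373 below re-runs the apply with `--force`. The sketch below shows one way to express that retry in Go; it is illustrative only — the kubeconfig path, manifest list, retry bounds, and delay are assumptions, not minikube's addons.go code. An alternative is to apply the CRD manifests first, wait for their Established condition, and only then apply the VolumeSnapshotClass.)

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// applyWithCRDRetry re-runs `kubectl apply` while the server still rejects
// custom resources whose CRDs were created in the same batch but are not yet
// established. Paths, retry bounds, and the delay are illustrative assumptions.
func applyWithCRDRetry(kubeconfig string, manifests []string) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for attempt := 0; attempt < 10; attempt++ {
		cmd := exec.Command("kubectl", args...)
		var stderr bytes.Buffer
		cmd.Stderr = &stderr
		err := cmd.Run()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply: %v: %s", err, stderr.String())
		// Only the CRD-establishment race is worth retrying; other errors are final.
		if !strings.Contains(stderr.String(), "ensure CRDs are installed first") {
			return lastErr
		}
		time.Sleep(500 * time.Millisecond) // give the API server time to establish the new CRDs
	}
	return lastErr
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}
	if err := applyWithCRDRetry("/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Println("apply failed:", err)
	}
}
```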
	I0814 16:10:59.216600   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216604   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216606   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216620   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216634   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216637   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216642   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216645   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216651   21883 addons.go:475] Verifying addon metrics-server=true in "addons-521895"
	I0814 16:10:59.216654   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216652   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216664   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.216678   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216686   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216693   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216700   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.216708   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216714   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216638   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216722   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216716   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216694   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216729   21883 addons.go:475] Verifying addon ingress=true in "addons-521895"
	I0814 16:10:59.216749   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216897   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216925   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216932   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216940   21883 addons.go:475] Verifying addon registry=true in "addons-521895"
	I0814 16:10:59.216982   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.217013   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.217441   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.217455   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.217811   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.217835   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218021   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.218034   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.218044   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.216740   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.218087   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.217851   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.217941   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218153   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.218162   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.218172   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.218435   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218449   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.218579   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.218625   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.218642   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218659   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218665   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.218670   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.219656   21883 out.go:177] * Verifying registry addon...
	I0814 16:10:59.219716   21883 out.go:177] * Verifying ingress addon...
	I0814 16:10:59.220405   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.220469   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.221006   21883 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-521895 service yakd-dashboard -n yakd-dashboard
	
	I0814 16:10:59.221772   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.221783   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.222651   21883 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0814 16:10:59.222651   21883 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0814 16:10:59.233481   21883 node_ready.go:49] node "addons-521895" has status "Ready":"True"
	I0814 16:10:59.233508   21883 node_ready.go:38] duration metric: took 18.091206ms for node "addons-521895" to be "Ready" ...
	I0814 16:10:59.233521   21883 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:10:59.270255   21883 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0814 16:10:59.270286   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:59.272344   21883 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-7rf58" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.280084   21883 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0814 16:10:59.280123   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:59.323126   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.323171   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.323513   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.323532   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.323562   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.334079   21883 pod_ready.go:92] pod "coredns-6f6b679f8f-7rf58" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.334099   21883 pod_ready.go:81] duration metric: took 61.72445ms for pod "coredns-6f6b679f8f-7rf58" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.334112   21883 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kdsjh" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.343052   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.343076   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.343356   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.343402   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.343414   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	W0814 16:10:59.343498   21883 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
	I0814 16:10:59.365587   21883 pod_ready.go:92] pod "coredns-6f6b679f8f-kdsjh" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.365621   21883 pod_ready.go:81] duration metric: took 31.500147ms for pod "coredns-6f6b679f8f-kdsjh" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.365637   21883 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.373460   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:59.411487   21883 pod_ready.go:92] pod "etcd-addons-521895" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.411516   21883 pod_ready.go:81] duration metric: took 45.869605ms for pod "etcd-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.411531   21883 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.438374   21883 pod_ready.go:92] pod "kube-apiserver-addons-521895" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.438410   21883 pod_ready.go:81] duration metric: took 26.870151ms for pod "kube-apiserver-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.438424   21883 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.618670   21883 pod_ready.go:92] pod "kube-controller-manager-addons-521895" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.618699   21883 pod_ready.go:81] duration metric: took 180.265352ms for pod "kube-controller-manager-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.618715   21883 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-djhvc" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.718829   21883 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-521895" context rescaled to 1 replicas
	I0814 16:10:59.727475   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:59.728010   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.204164   21883 pod_ready.go:92] pod "kube-proxy-djhvc" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:00.204187   21883 pod_ready.go:81] duration metric: took 585.463961ms for pod "kube-proxy-djhvc" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:00.204197   21883 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:00.228296   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:00.229033   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.421525   21883 pod_ready.go:92] pod "kube-scheduler-addons-521895" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:00.421549   21883 pod_ready.go:81] duration metric: took 217.343558ms for pod "kube-scheduler-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:00.421561   21883 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:00.737427   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.740283   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:00.998082   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.320077691s)
	I0814 16:11:00.998122   21883 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.643581233s)
	I0814 16:11:00.998148   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:00.998164   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:00.998428   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:00.998444   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:00.998455   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:00.998463   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:00.998687   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:00.998706   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:00.998717   21883 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-521895"
	I0814 16:11:01.000172   21883 out.go:177] * Verifying csi-hostpath-driver addon...
	I0814 16:11:01.000178   21883 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:11:01.002180   21883 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0814 16:11:01.002813   21883 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0814 16:11:01.003370   21883 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0814 16:11:01.003391   21883 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0814 16:11:01.022962   21883 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0814 16:11:01.022982   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:01.066734   21883 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0814 16:11:01.066761   21883 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0814 16:11:01.173697   21883 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 16:11:01.173719   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0814 16:11:01.230008   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:01.230091   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:01.294840   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.921327036s)
	I0814 16:11:01.294893   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:01.294907   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:01.295194   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:01.295215   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:01.295226   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:01.295234   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:01.296532   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:11:01.296544   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:01.296564   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:01.299035   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 16:11:01.508024   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:01.732594   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:01.734026   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.007288   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:02.226929   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:02.227936   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.427845   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:02.513492   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:02.750663   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.750935   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:02.798261   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.499186787s)
	I0814 16:11:02.798344   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:02.798362   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:02.798695   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:02.798714   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:02.798734   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:11:02.798800   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:02.798818   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:02.799075   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:02.799121   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:02.799105   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:11:02.801052   21883 addons.go:475] Verifying addon gcp-auth=true in "addons-521895"
	I0814 16:11:02.802582   21883 out.go:177] * Verifying gcp-auth addon...
	I0814 16:11:02.804847   21883 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0814 16:11:02.834593   21883 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0814 16:11:02.834621   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:03.009454   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:03.227644   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:03.228212   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:03.308438   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:03.507500   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:03.727614   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:03.728081   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:03.808381   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:04.007784   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:04.226752   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:04.227128   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:04.308760   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:04.428109   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:04.508069   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:04.726882   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:04.727496   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:04.808712   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:05.007695   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:05.227068   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:05.227235   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:05.319526   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:05.507013   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:05.727193   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:05.728531   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:05.808468   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.007202   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:06.228045   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:06.229731   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:06.310151   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.509983   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:06.730274   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:06.731864   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:06.809160   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.929137   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:07.007623   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:07.226586   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:07.227067   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:07.308599   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:07.507221   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:07.727762   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:07.728072   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:07.808804   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:08.007468   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:08.226459   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:08.226744   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:08.308406   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:08.507197   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:08.726628   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:08.727654   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:08.808701   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:09.007296   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:09.226381   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:09.226555   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:09.308684   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:09.428711   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:09.508071   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:09.727115   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:09.727466   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:09.808296   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:10.008280   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:10.228088   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:10.228620   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:10.308211   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:10.507596   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:10.728403   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:10.728601   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:10.808848   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.009852   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:11.226895   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:11.228783   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:11.308710   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.508200   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:11.726676   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:11.727270   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:11.808294   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.927438   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:12.007223   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:12.228152   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:12.228355   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:12.308991   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:12.506855   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:12.727718   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:12.727962   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:12.808073   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.008079   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:13.226418   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:13.226929   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:13.308257   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.631274   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:13.727828   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:13.728230   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:13.808572   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.927651   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:14.007952   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:14.226942   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:14.227724   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:14.308658   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:14.507920   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:14.726550   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:14.726703   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:14.808016   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:15.110669   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:15.227157   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:15.227366   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:15.309275   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:15.507571   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.113840   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:16.115033   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.115723   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.115815   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.125813   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:16.227404   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.227711   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.308389   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:16.507014   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.727511   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.727606   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.808034   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:17.007828   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:17.231581   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:17.231652   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:17.308472   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:17.507173   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:17.726983   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:17.728532   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:17.808925   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:18.006837   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:18.228895   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:18.229082   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:18.328890   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:18.426854   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:18.507771   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:18.727043   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:18.727486   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:18.807980   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:19.008476   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:19.233287   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:19.233786   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:19.308400   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:19.508874   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:19.727155   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:19.728464   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:19.813584   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:20.006667   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:20.228789   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:20.229342   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:20.312116   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:20.428440   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:20.508181   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:20.727373   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:20.727558   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:20.808847   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:21.007522   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:21.227842   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:21.228018   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:21.308017   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:21.508392   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:21.726964   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:21.728528   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:21.808940   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.007424   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:22.294744   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:22.295052   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:22.309891   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.506838   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:22.728194   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:22.728419   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:22.811652   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.928282   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:23.007367   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:23.227777   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:23.228301   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:23.308649   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:23.508564   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:23.728103   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:23.729155   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:23.808624   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.007706   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:24.226403   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:24.227093   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:24.308218   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.507457   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:24.727547   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:24.727845   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:24.808409   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.928360   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:25.013520   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:25.227947   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:25.227978   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:25.308814   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:25.507178   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:25.727030   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:25.728870   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:25.808339   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:26.007711   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:26.227101   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:26.227625   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:26.309581   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:26.507736   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:26.734799   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:26.735827   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:26.809021   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.006897   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:27.235695   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:27.236541   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:27.308369   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.428997   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:27.506855   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:27.998154   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.998283   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:27.998793   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.008923   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:28.227676   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:28.228085   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.308586   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:28.507935   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:28.727235   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:28.727573   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.807802   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.007809   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:29.227035   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:29.227262   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:29.308591   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.507286   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:29.727973   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:29.728460   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:29.809110   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.927863   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:30.007825   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:30.226841   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:30.227466   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:30.309723   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:30.507783   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:30.727535   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:30.727876   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:30.808532   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.007393   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:31.226590   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:31.226916   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:31.308371   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.506995   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:31.727621   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:31.727942   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:31.808541   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.927990   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:32.006924   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:32.226311   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:32.227284   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:32.308602   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:32.507833   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:32.727489   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:32.727775   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:32.808528   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:33.009003   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:33.227490   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:33.227722   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:33.308208   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:33.507805   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:33.727298   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:33.727534   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:33.807946   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:34.008130   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:34.227582   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:34.227616   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:34.308827   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:34.427547   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:34.507852   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:34.728353   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:34.728545   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:34.808910   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:35.009293   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:35.227629   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:35.228104   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:35.308423   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:35.508122   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:35.725987   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:35.726687   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:35.807739   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.008016   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:36.226755   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:36.227768   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:36.308157   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.508301   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:36.727117   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:36.727704   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:36.808601   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.928194   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:37.015564   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:37.502838   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:37.503012   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:37.503573   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:37.506905   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:37.727062   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:37.727161   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:37.808664   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:38.006884   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:38.228649   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:38.228698   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:38.308726   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:38.512067   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:38.727481   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:38.727658   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:38.808265   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:39.007855   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:39.226864   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:39.227791   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:39.308635   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:39.428084   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:39.506843   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:39.726713   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:39.727285   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:39.808214   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:40.007710   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:40.226737   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:40.228086   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:40.308810   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:40.508440   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:40.727285   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:40.728877   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:40.808438   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.007388   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:41.226920   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:41.227146   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:41.308547   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.508278   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:41.727886   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:41.728559   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:41.808475   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.928095   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:42.007516   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:42.226939   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:42.227988   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:42.312954   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:42.783037   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:42.783561   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:42.783582   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:42.809012   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:43.007865   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:43.226767   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:43.227551   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:43.308859   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:43.508254   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:43.747971   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:43.748157   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:43.808920   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:44.008591   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:44.228920   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:44.229445   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:44.328617   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:44.427970   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:44.508139   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:44.727257   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:44.728178   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:44.808525   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:45.007605   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:45.226095   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:45.226581   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:45.308065   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:45.508306   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:45.728209   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:45.728248   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:45.809174   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.007280   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:46.227083   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:46.227624   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:46.307894   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.508211   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:46.727191   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:46.728510   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:46.809583   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.927496   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:47.007417   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:47.227159   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:47.227595   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:47.309061   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:47.507231   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:47.727788   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:47.728199   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:47.807794   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.007775   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:48.227862   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:48.228222   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:48.308550   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.506871   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:48.726826   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:48.726994   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:48.811680   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.927556   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:49.007557   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:49.228370   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:49.232894   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:49.308737   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:49.507361   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.013041   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:50.013700   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:50.014019   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.014326   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.227635   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.227840   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:50.308783   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:50.507852   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.728955   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:50.729109   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.808880   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:51.007898   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:51.231705   21883 kapi.go:107] duration metric: took 52.009052914s to wait for kubernetes.io/minikube-addons=registry ...
	I0814 16:11:51.232128   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:51.308369   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:51.427088   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:51.506962   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:51.726421   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:51.810635   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:52.008143   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:52.227232   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:52.308638   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:52.508801   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:52.729754   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:52.808553   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:53.007609   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:53.227791   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:53.309397   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:53.428942   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:53.509081   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:53.726629   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:53.810056   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:54.007518   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:54.226467   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:54.308680   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:54.507900   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:54.742125   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:54.828414   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.009613   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:55.228673   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:55.327675   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.507601   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:55.726945   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:55.808119   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.927013   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:56.007706   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:56.232496   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:56.508942   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:56.509556   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:56.733765   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:56.814214   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.010484   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:57.227004   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:57.310037   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.507550   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:57.726665   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:57.810044   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.927270   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:58.007900   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:58.226434   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:58.308928   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:58.508230   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:58.728975   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:58.808718   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.009541   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:59.229673   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:59.308663   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.508513   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:59.726940   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:59.808273   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.927683   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:00.013942   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:00.235451   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:00.313101   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:00.508712   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:00.727383   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:00.828093   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:01.010158   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:01.233078   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:01.325043   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:01.510045   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:01.729672   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:01.810179   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:02.007132   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:02.227439   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:02.310266   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:02.431154   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:02.509197   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:02.727286   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:02.808663   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:03.007628   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:03.226511   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:03.308835   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:03.508526   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:03.727177   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:03.810202   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:04.008818   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:04.226738   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:04.308123   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:04.465824   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:04.531436   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:04.730065   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:04.832677   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:05.007811   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:05.227274   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:05.308903   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:05.508052   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:05.727150   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:05.810089   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:06.008440   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:06.227039   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:06.309707   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:06.509388   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:06.726806   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:06.808393   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:06.927154   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:07.007582   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:07.226694   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:07.308577   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:07.508627   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:07.728092   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:07.808983   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:08.007954   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:08.226812   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:08.308652   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:08.507040   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:08.726863   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:08.807988   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:08.973543   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:09.007222   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:09.227526   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:09.309757   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:09.506949   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:09.727245   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:10.033208   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:10.033966   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:10.226506   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:10.308108   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:10.507669   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:10.727045   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:10.808383   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:11.007986   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:11.227279   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:11.308145   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:11.428082   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:11.507064   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:11.726627   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:11.809388   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:12.008074   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:12.227704   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:12.308890   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:12.506591   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:12.726831   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:12.826973   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:13.009678   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:13.227321   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:13.309032   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:13.507107   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:13.729043   21883 kapi.go:107] duration metric: took 1m14.506392378s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0814 16:12:13.809785   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:13.927978   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:14.009075   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:14.310561   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:14.513774   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:14.808323   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:15.007857   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:15.308691   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:15.507626   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:15.808349   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:15.928170   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:16.007672   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:16.308435   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:16.508580   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:16.808612   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:17.007696   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:17.309091   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:17.507099   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:17.809867   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:17.928803   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:18.010797   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:18.308603   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:18.508805   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:18.808770   21883 kapi.go:107] duration metric: took 1m16.003918275s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0814 16:12:18.810337   21883 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-521895 cluster.
	I0814 16:12:18.811598   21883 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0814 16:12:18.812714   21883 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0814 16:12:19.007750   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:19.508433   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:20.008244   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:20.427070   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:20.507532   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:21.008596   21883 kapi.go:107] duration metric: took 1m20.005779592s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0814 16:12:21.010595   21883 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, helm-tiller, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0814 16:12:21.011829   21883 addons.go:510] duration metric: took 1m30.527733509s for enable addons: enabled=[storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget helm-tiller nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0814 16:12:22.427609   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:24.428953   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:26.927677   21883 pod_ready.go:92] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"True"
	I0814 16:12:26.927700   21883 pod_ready.go:81] duration metric: took 1m26.506131664s for pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:26.927710   21883 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hb8bq" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:26.932052   21883 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-hb8bq" in "kube-system" namespace has status "Ready":"True"
	I0814 16:12:26.932073   21883 pod_ready.go:81] duration metric: took 4.356748ms for pod "nvidia-device-plugin-daemonset-hb8bq" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:26.932091   21883 pod_ready.go:38] duration metric: took 1m27.698556013s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:12:26.932108   21883 api_server.go:52] waiting for apiserver process to appear ...
	I0814 16:12:26.932132   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:26.932176   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:26.974778   21883 cri.go:89] found id: "36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:26.974796   21883 cri.go:89] found id: ""
	I0814 16:12:26.974804   21883 logs.go:276] 1 containers: [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27]
	I0814 16:12:26.974844   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:26.979166   21883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:26.979230   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:27.019858   21883 cri.go:89] found id: "9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:27.019876   21883 cri.go:89] found id: ""
	I0814 16:12:27.019883   21883 logs.go:276] 1 containers: [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c]
	I0814 16:12:27.019941   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.024587   21883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:27.024655   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:27.068624   21883 cri.go:89] found id: "82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:27.068649   21883 cri.go:89] found id: ""
	I0814 16:12:27.068656   21883 logs.go:276] 1 containers: [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363]
	I0814 16:12:27.068711   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.072802   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:27.072860   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:27.112366   21883 cri.go:89] found id: "808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:27.112394   21883 cri.go:89] found id: ""
	I0814 16:12:27.112403   21883 logs.go:276] 1 containers: [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1]
	I0814 16:12:27.112493   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.117965   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:27.118020   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:27.161505   21883 cri.go:89] found id: "230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:27.161532   21883 cri.go:89] found id: ""
	I0814 16:12:27.161542   21883 logs.go:276] 1 containers: [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d]
	I0814 16:12:27.161597   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.166001   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:27.166054   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:27.204852   21883 cri.go:89] found id: "59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:27.204876   21883 cri.go:89] found id: ""
	I0814 16:12:27.204885   21883 logs.go:276] 1 containers: [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0]
	I0814 16:12:27.204942   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.208816   21883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:27.208880   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:27.246517   21883 cri.go:89] found id: ""
	I0814 16:12:27.246539   21883 logs.go:276] 0 containers: []
	W0814 16:12:27.246547   21883 logs.go:278] No container was found matching "kindnet"
	I0814 16:12:27.246559   21883 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:27.246572   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 16:12:27.310348   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:27.310529   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:27.337352   21883 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:27.337387   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:27.351734   21883 logs.go:123] Gathering logs for kube-scheduler [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1] ...
	I0814 16:12:27.351759   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:27.395992   21883 logs.go:123] Gathering logs for kube-controller-manager [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0] ...
	I0814 16:12:27.396032   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:27.456704   21883 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:27.456738   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:28.211996   21883 logs.go:123] Gathering logs for container status ...
	I0814 16:12:28.212045   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:28.274648   21883 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:28.274683   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:28.407272   21883 logs.go:123] Gathering logs for kube-apiserver [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27] ...
	I0814 16:12:28.407311   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:28.450978   21883 logs.go:123] Gathering logs for etcd [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c] ...
	I0814 16:12:28.451007   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:28.509108   21883 logs.go:123] Gathering logs for coredns [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363] ...
	I0814 16:12:28.509142   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:28.549422   21883 logs.go:123] Gathering logs for kube-proxy [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d] ...
	I0814 16:12:28.549450   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:28.587741   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:28.587766   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 16:12:28.587814   21883 out.go:239] X Problems detected in kubelet:
	W0814 16:12:28.587825   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:28.587832   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:28.587841   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:28.587847   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:12:38.589469   21883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:12:38.610407   21883 api_server.go:72] duration metric: took 1m48.126323258s to wait for apiserver process to appear ...
	I0814 16:12:38.610437   21883 api_server.go:88] waiting for apiserver healthz status ...
	I0814 16:12:38.610470   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:38.610529   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:38.651554   21883 cri.go:89] found id: "36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:38.651640   21883 cri.go:89] found id: ""
	I0814 16:12:38.651655   21883 logs.go:276] 1 containers: [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27]
	I0814 16:12:38.651706   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.656520   21883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:38.656584   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:38.701852   21883 cri.go:89] found id: "9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:38.701881   21883 cri.go:89] found id: ""
	I0814 16:12:38.701891   21883 logs.go:276] 1 containers: [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c]
	I0814 16:12:38.701938   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.705967   21883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:38.706028   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:38.741044   21883 cri.go:89] found id: "82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:38.741070   21883 cri.go:89] found id: ""
	I0814 16:12:38.741078   21883 logs.go:276] 1 containers: [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363]
	I0814 16:12:38.741121   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.748711   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:38.748772   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:38.783408   21883 cri.go:89] found id: "808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:38.783427   21883 cri.go:89] found id: ""
	I0814 16:12:38.783434   21883 logs.go:276] 1 containers: [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1]
	I0814 16:12:38.783484   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.787452   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:38.787507   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:38.824371   21883 cri.go:89] found id: "230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:38.824394   21883 cri.go:89] found id: ""
	I0814 16:12:38.824403   21883 logs.go:276] 1 containers: [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d]
	I0814 16:12:38.824457   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.828358   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:38.828426   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:38.867297   21883 cri.go:89] found id: "59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:38.867316   21883 cri.go:89] found id: ""
	I0814 16:12:38.867350   21883 logs.go:276] 1 containers: [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0]
	I0814 16:12:38.867408   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.871178   21883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:38.871227   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:38.917341   21883 cri.go:89] found id: ""
	I0814 16:12:38.917367   21883 logs.go:276] 0 containers: []
	W0814 16:12:38.917375   21883 logs.go:278] No container was found matching "kindnet"
	I0814 16:12:38.917382   21883 logs.go:123] Gathering logs for kube-proxy [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d] ...
	I0814 16:12:38.917396   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:38.950416   21883 logs.go:123] Gathering logs for kube-controller-manager [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0] ...
	I0814 16:12:38.950449   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:39.006272   21883 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:39.006302   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:39.125296   21883 logs.go:123] Gathering logs for kube-scheduler [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1] ...
	I0814 16:12:39.125320   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:39.169322   21883 logs.go:123] Gathering logs for kube-apiserver [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27] ...
	I0814 16:12:39.169351   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:39.223953   21883 logs.go:123] Gathering logs for etcd [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c] ...
	I0814 16:12:39.223981   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:39.292562   21883 logs.go:123] Gathering logs for coredns [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363] ...
	I0814 16:12:39.292593   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:39.334707   21883 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:39.334730   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:40.302593   21883 logs.go:123] Gathering logs for container status ...
	I0814 16:12:40.302637   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:40.355184   21883 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:40.355211   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 16:12:40.409332   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:40.409513   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:40.437296   21883 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:40.437320   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:40.452508   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:40.452535   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 16:12:40.452587   21883 out.go:239] X Problems detected in kubelet:
	W0814 16:12:40.452595   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:40.452602   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:40.452609   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:40.452615   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:12:50.453536   21883 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0814 16:12:50.460137   21883 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0814 16:12:50.461166   21883 api_server.go:141] control plane version: v1.31.0
	I0814 16:12:50.461193   21883 api_server.go:131] duration metric: took 11.850743129s to wait for apiserver health ...
	I0814 16:12:50.461201   21883 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 16:12:50.461219   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:50.461261   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:50.504955   21883 cri.go:89] found id: "36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:50.504974   21883 cri.go:89] found id: ""
	I0814 16:12:50.504981   21883 logs.go:276] 1 containers: [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27]
	I0814 16:12:50.505037   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.508772   21883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:50.508842   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:50.543907   21883 cri.go:89] found id: "9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:50.543926   21883 cri.go:89] found id: ""
	I0814 16:12:50.543933   21883 logs.go:276] 1 containers: [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c]
	I0814 16:12:50.543976   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.547941   21883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:50.547994   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:50.585313   21883 cri.go:89] found id: "82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:50.585333   21883 cri.go:89] found id: ""
	I0814 16:12:50.585345   21883 logs.go:276] 1 containers: [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363]
	I0814 16:12:50.585395   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.589574   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:50.589638   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:50.633893   21883 cri.go:89] found id: "808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:50.633910   21883 cri.go:89] found id: ""
	I0814 16:12:50.633917   21883 logs.go:276] 1 containers: [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1]
	I0814 16:12:50.633959   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.638080   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:50.638131   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:50.684154   21883 cri.go:89] found id: "230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:50.684183   21883 cri.go:89] found id: ""
	I0814 16:12:50.684191   21883 logs.go:276] 1 containers: [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d]
	I0814 16:12:50.684245   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.689888   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:50.689951   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:50.727992   21883 cri.go:89] found id: "59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:50.728024   21883 cri.go:89] found id: ""
	I0814 16:12:50.728033   21883 logs.go:276] 1 containers: [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0]
	I0814 16:12:50.728087   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.732134   21883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:50.732200   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:50.784221   21883 cri.go:89] found id: ""
	I0814 16:12:50.784250   21883 logs.go:276] 0 containers: []
	W0814 16:12:50.784261   21883 logs.go:278] No container was found matching "kindnet"
	I0814 16:12:50.784272   21883 logs.go:123] Gathering logs for kube-scheduler [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1] ...
	I0814 16:12:50.784286   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:50.827441   21883 logs.go:123] Gathering logs for container status ...
	I0814 16:12:50.827475   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:50.873582   21883 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:50.873611   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:50.889073   21883 logs.go:123] Gathering logs for etcd [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c] ...
	I0814 16:12:50.889105   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:50.948168   21883 logs.go:123] Gathering logs for kube-apiserver [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27] ...
	I0814 16:12:50.948209   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:51.008579   21883 logs.go:123] Gathering logs for coredns [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363] ...
	I0814 16:12:51.008609   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:51.049331   21883 logs.go:123] Gathering logs for kube-proxy [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d] ...
	I0814 16:12:51.049361   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:51.092024   21883 logs.go:123] Gathering logs for kube-controller-manager [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0] ...
	I0814 16:12:51.092051   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:51.150309   21883 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:51.150342   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:51.938495   21883 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:51.938538   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 16:12:51.992141   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:51.992325   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:52.022093   21883 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:52.022120   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:52.162979   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:52.163005   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 16:12:52.163053   21883 out.go:239] X Problems detected in kubelet:
	W0814 16:12:52.163061   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:52.163068   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:52.163074   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:52.163079   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:13:02.172734   21883 system_pods.go:59] 18 kube-system pods found
	I0814 16:13:02.172763   21883 system_pods.go:61] "coredns-6f6b679f8f-7rf58" [86130fa5-9013-49d5-bc2b-3ddf60ec917a] Running
	I0814 16:13:02.172769   21883 system_pods.go:61] "csi-hostpath-attacher-0" [47d6f4b0-c75e-4ce3-ad8c-2d53e5a19dd4] Running
	I0814 16:13:02.172772   21883 system_pods.go:61] "csi-hostpath-resizer-0" [086e76c0-a74c-44df-9be2-14402f042765] Running
	I0814 16:13:02.172776   21883 system_pods.go:61] "csi-hostpathplugin-z69n6" [e79768e2-a157-4ba9-a9de-eb6315d2700f] Running
	I0814 16:13:02.172779   21883 system_pods.go:61] "etcd-addons-521895" [749e1439-7f4b-4ac1-a469-04f8d4974517] Running
	I0814 16:13:02.172782   21883 system_pods.go:61] "kube-apiserver-addons-521895" [f509e56e-a614-4596-8e29-a6dc0c8e0430] Running
	I0814 16:13:02.172785   21883 system_pods.go:61] "kube-controller-manager-addons-521895" [b1d6cc2e-07cb-4931-a1d3-e3f4c74db5d7] Running
	I0814 16:13:02.172789   21883 system_pods.go:61] "kube-ingress-dns-minikube" [025b355f-aadc-4f6b-a2de-96a654405923] Running
	I0814 16:13:02.172791   21883 system_pods.go:61] "kube-proxy-djhvc" [ca62976b-59e3-41d9-9241-5beb8738bdb4] Running
	I0814 16:13:02.172794   21883 system_pods.go:61] "kube-scheduler-addons-521895" [b4a0abd4-d0df-48f3-b377-4b0678a452c2] Running
	I0814 16:13:02.172796   21883 system_pods.go:61] "metrics-server-8988944d9-d5x8v" [efa28343-d15d-4a26-bc87-4c5c4e6cce30] Running
	I0814 16:13:02.172799   21883 system_pods.go:61] "nvidia-device-plugin-daemonset-hb8bq" [36cab318-9976-4377-b906-b14c2be76513] Running
	I0814 16:13:02.172802   21883 system_pods.go:61] "registry-6fb4cdfc84-lbmb2" [4d1c8ab4-e3b2-4f6d-a2cb-c8356de3d1f8] Running
	I0814 16:13:02.172812   21883 system_pods.go:61] "registry-proxy-rhc59" [3a27fa71-fb85-4942-be2d-fcc16d40a026] Running
	I0814 16:13:02.172816   21883 system_pods.go:61] "snapshot-controller-56fcc65765-9v2kk" [d3a1971c-1a60-4de4-bfc0-aaa22f03cc18] Running
	I0814 16:13:02.172821   21883 system_pods.go:61] "snapshot-controller-56fcc65765-vxxwk" [6fb6d8b0-d7a1-4dee-9f27-b63f3970aa01] Running
	I0814 16:13:02.172825   21883 system_pods.go:61] "storage-provisioner" [582ce9ea-b602-4a47-b4a7-a4b7f8658252] Running
	I0814 16:13:02.172829   21883 system_pods.go:61] "tiller-deploy-b48cc5f79-tjffm" [be865efe-6514-4d4f-b8e3-6c2ccec2e6f2] Running
	I0814 16:13:02.172840   21883 system_pods.go:74] duration metric: took 11.711633618s to wait for pod list to return data ...
	I0814 16:13:02.172851   21883 default_sa.go:34] waiting for default service account to be created ...
	I0814 16:13:02.175052   21883 default_sa.go:45] found service account: "default"
	I0814 16:13:02.175067   21883 default_sa.go:55] duration metric: took 2.210207ms for default service account to be created ...
	I0814 16:13:02.175072   21883 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 16:13:02.184395   21883 system_pods.go:86] 18 kube-system pods found
	I0814 16:13:02.184423   21883 system_pods.go:89] "coredns-6f6b679f8f-7rf58" [86130fa5-9013-49d5-bc2b-3ddf60ec917a] Running
	I0814 16:13:02.184428   21883 system_pods.go:89] "csi-hostpath-attacher-0" [47d6f4b0-c75e-4ce3-ad8c-2d53e5a19dd4] Running
	I0814 16:13:02.184433   21883 system_pods.go:89] "csi-hostpath-resizer-0" [086e76c0-a74c-44df-9be2-14402f042765] Running
	I0814 16:13:02.184437   21883 system_pods.go:89] "csi-hostpathplugin-z69n6" [e79768e2-a157-4ba9-a9de-eb6315d2700f] Running
	I0814 16:13:02.184441   21883 system_pods.go:89] "etcd-addons-521895" [749e1439-7f4b-4ac1-a469-04f8d4974517] Running
	I0814 16:13:02.184446   21883 system_pods.go:89] "kube-apiserver-addons-521895" [f509e56e-a614-4596-8e29-a6dc0c8e0430] Running
	I0814 16:13:02.184450   21883 system_pods.go:89] "kube-controller-manager-addons-521895" [b1d6cc2e-07cb-4931-a1d3-e3f4c74db5d7] Running
	I0814 16:13:02.184454   21883 system_pods.go:89] "kube-ingress-dns-minikube" [025b355f-aadc-4f6b-a2de-96a654405923] Running
	I0814 16:13:02.184458   21883 system_pods.go:89] "kube-proxy-djhvc" [ca62976b-59e3-41d9-9241-5beb8738bdb4] Running
	I0814 16:13:02.184465   21883 system_pods.go:89] "kube-scheduler-addons-521895" [b4a0abd4-d0df-48f3-b377-4b0678a452c2] Running
	I0814 16:13:02.184471   21883 system_pods.go:89] "metrics-server-8988944d9-d5x8v" [efa28343-d15d-4a26-bc87-4c5c4e6cce30] Running
	I0814 16:13:02.184477   21883 system_pods.go:89] "nvidia-device-plugin-daemonset-hb8bq" [36cab318-9976-4377-b906-b14c2be76513] Running
	I0814 16:13:02.184485   21883 system_pods.go:89] "registry-6fb4cdfc84-lbmb2" [4d1c8ab4-e3b2-4f6d-a2cb-c8356de3d1f8] Running
	I0814 16:13:02.184491   21883 system_pods.go:89] "registry-proxy-rhc59" [3a27fa71-fb85-4942-be2d-fcc16d40a026] Running
	I0814 16:13:02.184497   21883 system_pods.go:89] "snapshot-controller-56fcc65765-9v2kk" [d3a1971c-1a60-4de4-bfc0-aaa22f03cc18] Running
	I0814 16:13:02.184501   21883 system_pods.go:89] "snapshot-controller-56fcc65765-vxxwk" [6fb6d8b0-d7a1-4dee-9f27-b63f3970aa01] Running
	I0814 16:13:02.184506   21883 system_pods.go:89] "storage-provisioner" [582ce9ea-b602-4a47-b4a7-a4b7f8658252] Running
	I0814 16:13:02.184510   21883 system_pods.go:89] "tiller-deploy-b48cc5f79-tjffm" [be865efe-6514-4d4f-b8e3-6c2ccec2e6f2] Running
	I0814 16:13:02.184516   21883 system_pods.go:126] duration metric: took 9.438972ms to wait for k8s-apps to be running ...
	I0814 16:13:02.184527   21883 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 16:13:02.184578   21883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:13:02.198917   21883 system_svc.go:56] duration metric: took 14.385391ms WaitForService to wait for kubelet
	I0814 16:13:02.198939   21883 kubeadm.go:582] duration metric: took 2m11.714863667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:13:02.198960   21883 node_conditions.go:102] verifying NodePressure condition ...
	I0814 16:13:02.202210   21883 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:13:02.202230   21883 node_conditions.go:123] node cpu capacity is 2
	I0814 16:13:02.202241   21883 node_conditions.go:105] duration metric: took 3.275262ms to run NodePressure ...
	I0814 16:13:02.202251   21883 start.go:241] waiting for startup goroutines ...
	I0814 16:13:02.202257   21883 start.go:246] waiting for cluster config update ...
	I0814 16:13:02.202273   21883 start.go:255] writing updated cluster config ...
	I0814 16:13:02.202543   21883 ssh_runner.go:195] Run: rm -f paused
	I0814 16:13:02.249953   21883 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 16:13:02.252076   21883 out.go:177] * Done! kubectl is now configured to use "addons-521895" cluster and "default" namespace by default
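The log-gathering loop recorded above can be reproduced by hand against the same node for debugging. A minimal sketch, assuming shell access to the addons-521895 VM (for example via "out/minikube-linux-amd64 -p addons-521895 ssh") and the same crictl/journalctl/kubectl binaries referenced in the log; the container-id placeholder below is hypothetical and stands for one of the IDs printed by the first command:

    # list CRI containers for a given component (matches the cri.go:54 queries above)
    sudo crictl ps -a --quiet --name=kube-apiserver
    # tail the last 400 lines of one container's logs (matches logs.go:123)
    sudo /usr/bin/crictl logs --tail 400 <container-id>
    # unit logs for the container runtime and the kubelet
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    # node description using the kubeconfig baked into the VM
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

These are the same commands minikube's ssh_runner executes when it assembles the "Gathering logs for ..." sections; the CRI-O journal output they produce is what appears in the "==> CRI-O <==" dump that follows.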
	
	
	==> CRI-O <==
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.000658083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652170000630407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ef59513-2cc6-402f-872d-1c14c3542af3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.001120674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36604f95-e05c-4824-83f0-c4a888c54506 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.001180369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36604f95-e05c-4824-83f0-c4a888c54506 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.001527154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47c4168722e1637a10e7c34aaec5fef9a1ace31a05ae182bb2c71a6fb7b6413a,PodSandboxId:635cbf32feea39fe8a44e2b7c25066854454f2fe11a8c77ec7fc1ba58e55ff69,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723652162861464408,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-66swq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ef06ce6-4af7-44ee-b705-3b4afb65b830,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b73bad190ae9c817aee17e2d686fad84ad9d03119f1d456cee173e028381ab,PodSandboxId:8b94b9345aeee5e3e29d23d0d035613e1b5f37d0f80b6d8f32f5a6d6e4de76c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723652024172556749,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0036eca6-d67d-4be0-8ac1-c9992f0e271c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed96fd2b54bfa0808bcdc715a349c08aaf7dc1859be3eb443813a066c53b9963,PodSandboxId:57342eaf36362618a0104852dd1bb86ff6026d34b63a310fb7c0b627b90dbe4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723651985823569019,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bd5e3c0-27de-4eb3-a
bf2-6ec6aaba4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729be2688faaa3c24dac63d6c90295312c5b7ab632afc98639452d4fc371830d,PodSandboxId:9b303ac14d2b0d0851c1a09ca54e3dbd74b671c399d2ff80a9899aba375841bd,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723651915151185253,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xl9cp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 372041a8-f634-423e-9690-a7d6dac51dae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ba4d18058542c886df9b6c32175ab1539532f41ea3224d23f5e2056286fbba,PodSandboxId:893e5bd0fcf30fbf7c33744531c843d15016b4b9c86fe1dcc666b23e7a37d62e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723651914862462299,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mmmrm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a0e0
fd8-9a00-47e0-8171-2dbd82d64ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e168f3ef7e67469d2d9f4e7ff85b00db25d41c565df9e630e04f88616c903081,PodSandboxId:f68a9f2de09aac1c02fca4b5c99be25dd88b75d8f0d607a6830e1795e8777aef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723651884969924930,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-d5x8v,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: efa28343-d15d-4a26-bc87-4c5c4e6cce30,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a,PodSandboxId:8a0577b5f645ee8536bf328a05a802e79a61956ed2feef0b58a306f32248437e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723651856405063975,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582ce9ea-b602-4a47-b4a7-a4b7f8658252,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363,PodSandboxId:d4a139ca52c61c8840dde82b12112c4321349dc23dab2603d385958c952e7ccb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723651853941681271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-7rf58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86130fa5-9013-49d5-bc2b-3ddf60ec917a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d,PodSandboxId:52036d37c7ebe82c3fe23042360bf609e0ee614eb7325de863eed3ab2e30cde8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723651851693609744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djhvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca62976b-59e3-41d9-9241-5beb8738bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0,PodSandboxId:fc0a277acf0707799158ab115b03d3754d21921f2352952095c9fe662eb4a985,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723651840047578598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a8977315d43d0334fc879b7776f617,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27,PodSandboxId:110f9b94800de7bf90513c3fe06b2fe6526a01c25196a65a2ec4e96b38a0c179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d88
25f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723651839973403882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9d5a3befdc4c50408b6bfa01190b64,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c,PodSandboxId:f5e399ea8b90482e27e59d9367f526326b5d3e41b506c1ec0fb755ded6339eef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da7
92cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723651840014176916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967c8be72d3573e4e486a328526e6b08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1,PodSandboxId:cc8fe13ed7adb3752737ee3cfe0be8e73c84bb2aa633e35d33cae5706b721091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,
CreatedAt:1723651839907523946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 678be17c2681820daabe61cccf2292c1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36604f95-e05c-4824-83f0-c4a888c54506 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.036784741Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87bb12da-a074-46bf-a7c8-29583dd3694f name=/runtime.v1.RuntimeService/Version
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.036856526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87bb12da-a074-46bf-a7c8-29583dd3694f name=/runtime.v1.RuntimeService/Version
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.037978822Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1b14b9a-8e67-4547-a035-f24433527b51 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.039476092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652170039445483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1b14b9a-8e67-4547-a035-f24433527b51 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.039995687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=094e8f5d-a83f-4f89-8252-b35ba83d8067 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.040052348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=094e8f5d-a83f-4f89-8252-b35ba83d8067 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.040439375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47c4168722e1637a10e7c34aaec5fef9a1ace31a05ae182bb2c71a6fb7b6413a,PodSandboxId:635cbf32feea39fe8a44e2b7c25066854454f2fe11a8c77ec7fc1ba58e55ff69,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723652162861464408,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-66swq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ef06ce6-4af7-44ee-b705-3b4afb65b830,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b73bad190ae9c817aee17e2d686fad84ad9d03119f1d456cee173e028381ab,PodSandboxId:8b94b9345aeee5e3e29d23d0d035613e1b5f37d0f80b6d8f32f5a6d6e4de76c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723652024172556749,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0036eca6-d67d-4be0-8ac1-c9992f0e271c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed96fd2b54bfa0808bcdc715a349c08aaf7dc1859be3eb443813a066c53b9963,PodSandboxId:57342eaf36362618a0104852dd1bb86ff6026d34b63a310fb7c0b627b90dbe4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723651985823569019,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bd5e3c0-27de-4eb3-a
bf2-6ec6aaba4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729be2688faaa3c24dac63d6c90295312c5b7ab632afc98639452d4fc371830d,PodSandboxId:9b303ac14d2b0d0851c1a09ca54e3dbd74b671c399d2ff80a9899aba375841bd,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723651915151185253,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xl9cp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 372041a8-f634-423e-9690-a7d6dac51dae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ba4d18058542c886df9b6c32175ab1539532f41ea3224d23f5e2056286fbba,PodSandboxId:893e5bd0fcf30fbf7c33744531c843d15016b4b9c86fe1dcc666b23e7a37d62e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723651914862462299,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mmmrm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a0e0
fd8-9a00-47e0-8171-2dbd82d64ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e168f3ef7e67469d2d9f4e7ff85b00db25d41c565df9e630e04f88616c903081,PodSandboxId:f68a9f2de09aac1c02fca4b5c99be25dd88b75d8f0d607a6830e1795e8777aef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723651884969924930,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-d5x8v,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: efa28343-d15d-4a26-bc87-4c5c4e6cce30,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a,PodSandboxId:8a0577b5f645ee8536bf328a05a802e79a61956ed2feef0b58a306f32248437e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723651856405063975,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582ce9ea-b602-4a47-b4a7-a4b7f8658252,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363,PodSandboxId:d4a139ca52c61c8840dde82b12112c4321349dc23dab2603d385958c952e7ccb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723651853941681271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-7rf58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86130fa5-9013-49d5-bc2b-3ddf60ec917a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d,PodSandboxId:52036d37c7ebe82c3fe23042360bf609e0ee614eb7325de863eed3ab2e30cde8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723651851693609744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djhvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca62976b-59e3-41d9-9241-5beb8738bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0,PodSandboxId:fc0a277acf0707799158ab115b03d3754d21921f2352952095c9fe662eb4a985,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723651840047578598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a8977315d43d0334fc879b7776f617,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27,PodSandboxId:110f9b94800de7bf90513c3fe06b2fe6526a01c25196a65a2ec4e96b38a0c179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d88
25f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723651839973403882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9d5a3befdc4c50408b6bfa01190b64,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c,PodSandboxId:f5e399ea8b90482e27e59d9367f526326b5d3e41b506c1ec0fb755ded6339eef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da7
92cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723651840014176916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967c8be72d3573e4e486a328526e6b08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1,PodSandboxId:cc8fe13ed7adb3752737ee3cfe0be8e73c84bb2aa633e35d33cae5706b721091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,
CreatedAt:1723651839907523946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 678be17c2681820daabe61cccf2292c1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=094e8f5d-a83f-4f89-8252-b35ba83d8067 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.076025743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da1003a5-380e-4ea9-a418-acf71445a5e1 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.076449133Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da1003a5-380e-4ea9-a418-acf71445a5e1 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.077641958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb6b5855-2334-498a-ac7e-2e7f2e0ba2a2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.079139050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652170079114475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb6b5855-2334-498a-ac7e-2e7f2e0ba2a2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.080035131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0dfe025-dc7f-4af9-806c-758e119c688e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.080086594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0dfe025-dc7f-4af9-806c-758e119c688e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.080498570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47c4168722e1637a10e7c34aaec5fef9a1ace31a05ae182bb2c71a6fb7b6413a,PodSandboxId:635cbf32feea39fe8a44e2b7c25066854454f2fe11a8c77ec7fc1ba58e55ff69,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723652162861464408,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-66swq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ef06ce6-4af7-44ee-b705-3b4afb65b830,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b73bad190ae9c817aee17e2d686fad84ad9d03119f1d456cee173e028381ab,PodSandboxId:8b94b9345aeee5e3e29d23d0d035613e1b5f37d0f80b6d8f32f5a6d6e4de76c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723652024172556749,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0036eca6-d67d-4be0-8ac1-c9992f0e271c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed96fd2b54bfa0808bcdc715a349c08aaf7dc1859be3eb443813a066c53b9963,PodSandboxId:57342eaf36362618a0104852dd1bb86ff6026d34b63a310fb7c0b627b90dbe4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723651985823569019,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bd5e3c0-27de-4eb3-a
bf2-6ec6aaba4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729be2688faaa3c24dac63d6c90295312c5b7ab632afc98639452d4fc371830d,PodSandboxId:9b303ac14d2b0d0851c1a09ca54e3dbd74b671c399d2ff80a9899aba375841bd,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723651915151185253,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xl9cp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 372041a8-f634-423e-9690-a7d6dac51dae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ba4d18058542c886df9b6c32175ab1539532f41ea3224d23f5e2056286fbba,PodSandboxId:893e5bd0fcf30fbf7c33744531c843d15016b4b9c86fe1dcc666b23e7a37d62e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723651914862462299,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mmmrm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a0e0
fd8-9a00-47e0-8171-2dbd82d64ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e168f3ef7e67469d2d9f4e7ff85b00db25d41c565df9e630e04f88616c903081,PodSandboxId:f68a9f2de09aac1c02fca4b5c99be25dd88b75d8f0d607a6830e1795e8777aef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723651884969924930,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-d5x8v,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: efa28343-d15d-4a26-bc87-4c5c4e6cce30,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a,PodSandboxId:8a0577b5f645ee8536bf328a05a802e79a61956ed2feef0b58a306f32248437e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723651856405063975,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582ce9ea-b602-4a47-b4a7-a4b7f8658252,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363,PodSandboxId:d4a139ca52c61c8840dde82b12112c4321349dc23dab2603d385958c952e7ccb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723651853941681271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-7rf58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86130fa5-9013-49d5-bc2b-3ddf60ec917a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d,PodSandboxId:52036d37c7ebe82c3fe23042360bf609e0ee614eb7325de863eed3ab2e30cde8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723651851693609744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djhvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca62976b-59e3-41d9-9241-5beb8738bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0,PodSandboxId:fc0a277acf0707799158ab115b03d3754d21921f2352952095c9fe662eb4a985,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723651840047578598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a8977315d43d0334fc879b7776f617,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27,PodSandboxId:110f9b94800de7bf90513c3fe06b2fe6526a01c25196a65a2ec4e96b38a0c179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d88
25f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723651839973403882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9d5a3befdc4c50408b6bfa01190b64,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c,PodSandboxId:f5e399ea8b90482e27e59d9367f526326b5d3e41b506c1ec0fb755ded6339eef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da7
92cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723651840014176916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967c8be72d3573e4e486a328526e6b08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1,PodSandboxId:cc8fe13ed7adb3752737ee3cfe0be8e73c84bb2aa633e35d33cae5706b721091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,
CreatedAt:1723651839907523946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 678be17c2681820daabe61cccf2292c1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0dfe025-dc7f-4af9-806c-758e119c688e name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.111540076Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21fba2d1-1d6b-4fef-ae00-c244cc4e1c89 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.111611845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21fba2d1-1d6b-4fef-ae00-c244cc4e1c89 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.112765188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f870339b-1547-4e56-9edf-d18591320a7b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.113983558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652170113956181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f870339b-1547-4e56-9edf-d18591320a7b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.114619788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2330954-e447-4e59-978c-d3af5b10995d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.114689016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2330954-e447-4e59-978c-d3af5b10995d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:16:10 addons-521895 crio[683]: time="2024-08-14 16:16:10.115016510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47c4168722e1637a10e7c34aaec5fef9a1ace31a05ae182bb2c71a6fb7b6413a,PodSandboxId:635cbf32feea39fe8a44e2b7c25066854454f2fe11a8c77ec7fc1ba58e55ff69,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723652162861464408,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-66swq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ef06ce6-4af7-44ee-b705-3b4afb65b830,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b73bad190ae9c817aee17e2d686fad84ad9d03119f1d456cee173e028381ab,PodSandboxId:8b94b9345aeee5e3e29d23d0d035613e1b5f37d0f80b6d8f32f5a6d6e4de76c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723652024172556749,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0036eca6-d67d-4be0-8ac1-c9992f0e271c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed96fd2b54bfa0808bcdc715a349c08aaf7dc1859be3eb443813a066c53b9963,PodSandboxId:57342eaf36362618a0104852dd1bb86ff6026d34b63a310fb7c0b627b90dbe4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723651985823569019,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bd5e3c0-27de-4eb3-a
bf2-6ec6aaba4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729be2688faaa3c24dac63d6c90295312c5b7ab632afc98639452d4fc371830d,PodSandboxId:9b303ac14d2b0d0851c1a09ca54e3dbd74b671c399d2ff80a9899aba375841bd,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723651915151185253,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xl9cp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 372041a8-f634-423e-9690-a7d6dac51dae,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82ba4d18058542c886df9b6c32175ab1539532f41ea3224d23f5e2056286fbba,PodSandboxId:893e5bd0fcf30fbf7c33744531c843d15016b4b9c86fe1dcc666b23e7a37d62e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723651914862462299,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-mmmrm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5a0e0
fd8-9a00-47e0-8171-2dbd82d64ae8,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e168f3ef7e67469d2d9f4e7ff85b00db25d41c565df9e630e04f88616c903081,PodSandboxId:f68a9f2de09aac1c02fca4b5c99be25dd88b75d8f0d607a6830e1795e8777aef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723651884969924930,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-d5x8v,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: efa28343-d15d-4a26-bc87-4c5c4e6cce30,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a,PodSandboxId:8a0577b5f645ee8536bf328a05a802e79a61956ed2feef0b58a306f32248437e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723651856405063975,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582ce9ea-b602-4a47-b4a7-a4b7f8658252,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363,PodSandboxId:d4a139ca52c61c8840dde82b12112c4321349dc23dab2603d385958c952e7ccb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723651853941681271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-7rf58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86130fa5-9013-49d5-bc2b-3ddf60ec917a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d,PodSandboxId:52036d37c7ebe82c3fe23042360bf609e0ee614eb7325de863eed3ab2e30cde8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723651851693609744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djhvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca62976b-59e3-41d9-9241-5beb8738bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0,PodSandboxId:fc0a277acf0707799158ab115b03d3754d21921f2352952095c9fe662eb4a985,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723651840047578598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a8977315d43d0334fc879b7776f617,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27,PodSandboxId:110f9b94800de7bf90513c3fe06b2fe6526a01c25196a65a2ec4e96b38a0c179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d88
25f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723651839973403882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9d5a3befdc4c50408b6bfa01190b64,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c,PodSandboxId:f5e399ea8b90482e27e59d9367f526326b5d3e41b506c1ec0fb755ded6339eef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da7
92cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723651840014176916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967c8be72d3573e4e486a328526e6b08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1,PodSandboxId:cc8fe13ed7adb3752737ee3cfe0be8e73c84bb2aa633e35d33cae5706b721091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,
CreatedAt:1723651839907523946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 678be17c2681820daabe61cccf2292c1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2330954-e447-4e59-978c-d3af5b10995d name=/runtime.v1.RuntimeService/ListContainers
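
Note: the entries above are CRI-O's debug-level gRPC traces; the kubelet periodically calls /runtime.v1.RuntimeService/ListContainers (plus Version and ImageFsInfo), and each request/response pair is logged by the otel-collector interceptor. A rough way to pull the same stream directly from the node (a sketch, assuming CRI-O runs as the crio systemd unit on this ISO):

  out/minikube-linux-amd64 -p addons-521895 ssh "sudo journalctl -u crio --no-pager -n 200"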
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	47c4168722e16       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   635cbf32feea3       hello-world-app-55bf9c44b4-66swq
	88b73bad190ae       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   8b94b9345aeee       nginx
	ed96fd2b54bfa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   57342eaf36362       busybox
	729be2688faaa       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     1                   9b303ac14d2b0       ingress-nginx-admission-patch-xl9cp
	82ba4d1805854       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   893e5bd0fcf30       ingress-nginx-admission-create-mmmrm
	e168f3ef7e674       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   f68a9f2de09aa       metrics-server-8988944d9-d5x8v
	8d99a130a829f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   8a0577b5f645e       storage-provisioner
	82e1477a10cc7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   d4a139ca52c61       coredns-6f6b679f8f-7rf58
	230305fe29454       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   52036d37c7ebe       kube-proxy-djhvc
	59a7d413ae30c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   fc0a277acf070       kube-controller-manager-addons-521895
	9ab2a01dd198e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   f5e399ea8b904       etcd-addons-521895
	36daf0f60c2e9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   110f9b94800de       kube-apiserver-addons-521895
	808f6e1d6cb54       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   cc8fe13ed7adb       kube-scheduler-addons-521895
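
Note: the table above is the runtime's own view of the pods dumped elsewhere in this report; a comparable listing can usually be reproduced on the node with crictl (a sketch, using the same profile name as this run):

  out/minikube-linux-amd64 -p addons-521895 ssh "sudo crictl ps -a"

The -a flag includes exited containers, which is why the completed admission create/patch job containers still show up here.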
	
	
	==> coredns [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363] <==
	[INFO] 10.244.0.8:51757 - 19041 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.002086208s
	[INFO] 10.244.0.8:44376 - 30158 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170383s
	[INFO] 10.244.0.8:44376 - 3011 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000206073s
	[INFO] 10.244.0.8:43200 - 54742 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137727s
	[INFO] 10.244.0.8:43200 - 37336 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094919s
	[INFO] 10.244.0.8:59625 - 24812 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148285s
	[INFO] 10.244.0.8:59625 - 8426 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000292179s
	[INFO] 10.244.0.8:50221 - 61004 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081016s
	[INFO] 10.244.0.8:50221 - 27443 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000138708s
	[INFO] 10.244.0.8:39268 - 21789 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005817s
	[INFO] 10.244.0.8:39268 - 40467 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133597s
	[INFO] 10.244.0.8:52216 - 27921 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048015s
	[INFO] 10.244.0.8:52216 - 38935 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084291s
	[INFO] 10.244.0.8:54703 - 45434 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000063555s
	[INFO] 10.244.0.8:54703 - 57460 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144904s
	[INFO] 10.244.0.22:58079 - 46803 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000450458s
	[INFO] 10.244.0.22:58414 - 29689 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080494s
	[INFO] 10.244.0.22:60407 - 58327 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100183s
	[INFO] 10.244.0.22:37758 - 11954 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000197967s
	[INFO] 10.244.0.22:51074 - 50169 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076677s
	[INFO] 10.244.0.22:48023 - 37918 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00005431s
	[INFO] 10.244.0.22:32981 - 9654 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000868448s
	[INFO] 10.244.0.22:45595 - 27232 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000413993s
	[INFO] 10.244.0.24:34296 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000377869s
	[INFO] 10.244.0.24:49396 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106619s
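
Note: the NXDOMAIN answers above are expected; the pod resolver appends the cluster search domains (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) to each lookup before trying the final name, so every query produces a few NXDOMAIN responses ahead of the eventual NOERROR. A way to follow the live CoreDNS logs for comparison (assuming the standard k8s-app=kube-dns label on the CoreDNS pods):

  kubectl --context addons-521895 -n kube-system logs -l k8s-app=kube-dns --tail=50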
	
	
	==> describe nodes <==
	Name:               addons-521895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-521895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=addons-521895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_10_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-521895
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:10:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-521895
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:16:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:15:20 +0000   Wed, 14 Aug 2024 16:10:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:15:20 +0000   Wed, 14 Aug 2024 16:10:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:15:20 +0000   Wed, 14 Aug 2024 16:10:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:15:20 +0000   Wed, 14 Aug 2024 16:10:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    addons-521895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 66c4c615e22741dfb4a932e29dcfcd60
	  System UUID:                66c4c615-e227-41df-b4a9-32e29dcfcd60
	  Boot ID:                    82d08b09-812d-45fd-ab2e-b0075dfc9acb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-55bf9c44b4-66swq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 coredns-6f6b679f8f-7rf58                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m20s
	  kube-system                 etcd-addons-521895                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m27s
	  kube-system                 kube-apiserver-addons-521895             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-addons-521895    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-djhvc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-addons-521895             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 metrics-server-8988944d9-d5x8v           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m14s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m17s  kube-proxy       
	  Normal  Starting                 5m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m25s  kubelet          Node addons-521895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s  kubelet          Node addons-521895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s  kubelet          Node addons-521895 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m24s  kubelet          Node addons-521895 status is now: NodeReady
	  Normal  RegisteredNode           5m21s  node-controller  Node addons-521895 event: Registered Node addons-521895 in Controller
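
Note: the node summary above is kubectl's describe-node view of the test VM; it can be regenerated against this profile with, for example:

  kubectl --context addons-521895 describe node addons-521895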
	
	
	==> dmesg <==
	[  +5.692772] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.354953] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.000746] kauditd_printk_skb: 20 callbacks suppressed
	[Aug14 16:12] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.543087] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.043659] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.082876] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.461286] kauditd_printk_skb: 40 callbacks suppressed
	[Aug14 16:13] kauditd_printk_skb: 28 callbacks suppressed
	[ +13.100045] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.768894] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.304287] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.360381] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.735512] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.058200] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.055839] kauditd_printk_skb: 8 callbacks suppressed
	[Aug14 16:14] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.880346] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.305462] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.783853] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.783087] kauditd_printk_skb: 24 callbacks suppressed
	[ +13.484468] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.097879] kauditd_printk_skb: 13 callbacks suppressed
	[Aug14 16:15] kauditd_printk_skb: 4 callbacks suppressed
	[Aug14 16:16] kauditd_printk_skb: 19 callbacks suppressed
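
Note: the kauditd_printk_skb lines only mean the kernel rate-limited (suppressed) that many audit records; on a busy test VM this is routine and not itself an error. The full ring buffer can be inspected with, for example:

  out/minikube-linux-amd64 -p addons-521895 ssh "sudo dmesg | tail -n 50"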
	
	
	==> etcd [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c] <==
	{"level":"info","ts":"2024-08-14T16:11:56.492883Z","caller":"traceutil/trace.go:171","msg":"trace[173330722] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1020; }","duration":"151.339798ms","start":"2024-08-14T16:11:56.341531Z","end":"2024-08-14T16:11:56.492871Z","steps":["trace[173330722] 'range keys from in-memory index tree'  (duration: 151.251861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:11:56.492903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.857827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:11:56.492938Z","caller":"traceutil/trace.go:171","msg":"trace[1475812983] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1020; }","duration":"231.90114ms","start":"2024-08-14T16:11:56.261027Z","end":"2024-08-14T16:11:56.492928Z","steps":["trace[1475812983] 'range keys from in-memory index tree'  (duration: 231.786452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:11:56.493042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.58381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:11:56.493056Z","caller":"traceutil/trace.go:171","msg":"trace[2089169036] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"197.598653ms","start":"2024-08-14T16:11:56.295453Z","end":"2024-08-14T16:11:56.493051Z","steps":["trace[2089169036] 'range keys from in-memory index tree'  (duration: 197.508719ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:11:56.717488Z","caller":"traceutil/trace.go:171","msg":"trace[27944803] transaction","detail":"{read_only:false; response_revision:1021; number_of_response:1; }","duration":"222.449565ms","start":"2024-08-14T16:11:56.495011Z","end":"2024-08-14T16:11:56.717461Z","steps":["trace[27944803] 'process raft request'  (duration: 222.369686ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:11:56.717797Z","caller":"traceutil/trace.go:171","msg":"trace[615001554] linearizableReadLoop","detail":"{readStateIndex:1054; appliedIndex:1054; }","duration":"219.746386ms","start":"2024-08-14T16:11:56.498042Z","end":"2024-08-14T16:11:56.717789Z","steps":["trace[615001554] 'read index received'  (duration: 219.743127ms)","trace[615001554] 'applied index is now lower than readState.Index'  (duration: 2.506µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:11:56.718110Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.050868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-521895\" ","response":"range_response_count:1 size:7359"}
	{"level":"info","ts":"2024-08-14T16:11:56.718159Z","caller":"traceutil/trace.go:171","msg":"trace[1889545156] range","detail":"{range_begin:/registry/minions/addons-521895; range_end:; response_count:1; response_revision:1021; }","duration":"220.111988ms","start":"2024-08-14T16:11:56.498040Z","end":"2024-08-14T16:11:56.718152Z","steps":["trace[1889545156] 'agreement among raft nodes before linearized reading'  (duration: 219.992418ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:12:10.015781Z","caller":"traceutil/trace.go:171","msg":"trace[115314874] linearizableReadLoop","detail":"{readStateIndex:1161; appliedIndex:1160; }","duration":"221.021506ms","start":"2024-08-14T16:12:09.794737Z","end":"2024-08-14T16:12:10.015759Z","steps":["trace[115314874] 'read index received'  (duration: 217.385432ms)","trace[115314874] 'applied index is now lower than readState.Index'  (duration: 3.635095ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:12:10.015954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.159667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:12:10.016061Z","caller":"traceutil/trace.go:171","msg":"trace[958517782] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1125; }","duration":"221.338992ms","start":"2024-08-14T16:12:09.794711Z","end":"2024-08-14T16:12:10.016050Z","steps":["trace[958517782] 'agreement among raft nodes before linearized reading'  (duration: 221.129195ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:12:10.016215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.967202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-d5x8v\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-14T16:12:10.016317Z","caller":"traceutil/trace.go:171","msg":"trace[1282319182] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-d5x8v; range_end:; response_count:1; response_revision:1125; }","duration":"105.076127ms","start":"2024-08-14T16:12:09.911230Z","end":"2024-08-14T16:12:10.016306Z","steps":["trace[1282319182] 'agreement among raft nodes before linearized reading'  (duration: 104.745964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:13:49.400333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.81349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:13:49.400470Z","caller":"traceutil/trace.go:171","msg":"trace[329974029] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1551; }","duration":"149.027895ms","start":"2024-08-14T16:13:49.251423Z","end":"2024-08-14T16:13:49.400451Z","steps":["trace[329974029] 'range keys from in-memory index tree'  (duration: 148.765329ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:13:49.400334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.247976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:13:49.400658Z","caller":"traceutil/trace.go:171","msg":"trace[910928357] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1551; }","duration":"362.690859ms","start":"2024-08-14T16:13:49.037960Z","end":"2024-08-14T16:13:49.400651Z","steps":["trace[910928357] 'count revisions from in-memory index tree'  (duration: 362.173482ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:13:49.400690Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T16:13:49.037909Z","time spent":"362.764802ms","remote":"127.0.0.1:46420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":27,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	{"level":"info","ts":"2024-08-14T16:14:20.788670Z","caller":"traceutil/trace.go:171","msg":"trace[1784458280] transaction","detail":"{read_only:false; response_revision:1788; number_of_response:1; }","duration":"188.584509ms","start":"2024-08-14T16:14:20.600067Z","end":"2024-08-14T16:14:20.788651Z","steps":["trace[1784458280] 'process raft request'  (duration: 188.46064ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:14:20.789103Z","caller":"traceutil/trace.go:171","msg":"trace[143668891] linearizableReadLoop","detail":"{readStateIndex:1862; appliedIndex:1861; }","duration":"121.180081ms","start":"2024-08-14T16:14:20.667910Z","end":"2024-08-14T16:14:20.789090Z","steps":["trace[143668891] 'read index received'  (duration: 120.539726ms)","trace[143668891] 'applied index is now lower than readState.Index'  (duration: 639.397µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:14:20.789197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.272524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/kube-system/csi-hostpath-resizer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:14:20.789217Z","caller":"traceutil/trace.go:171","msg":"trace[1211386156] range","detail":"{range_begin:/registry/statefulsets/kube-system/csi-hostpath-resizer; range_end:; response_count:0; response_revision:1788; }","duration":"121.305069ms","start":"2024-08-14T16:14:20.667905Z","end":"2024-08-14T16:14:20.789210Z","steps":["trace[1211386156] 'agreement among raft nodes before linearized reading'  (duration: 121.249912ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:14:37.836386Z","caller":"traceutil/trace.go:171","msg":"trace[208046168] transaction","detail":"{read_only:false; response_revision:1881; number_of_response:1; }","duration":"120.759975ms","start":"2024-08-14T16:14:37.715605Z","end":"2024-08-14T16:14:37.836365Z","steps":["trace[208046168] 'process raft request'  (duration: 120.448799ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:15:11.940587Z","caller":"traceutil/trace.go:171","msg":"trace[1875794507] transaction","detail":"{read_only:false; response_revision:1978; number_of_response:1; }","duration":"106.178787ms","start":"2024-08-14T16:15:11.834394Z","end":"2024-08-14T16:15:11.940573Z","steps":["trace[1875794507] 'process raft request'  (duration: 106.045035ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:16:10 up 5 min,  0 users,  load average: 0.41, 1.10, 0.63
	Linux addons-521895 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27] <==
	E0814 16:13:12.723503       1 conn.go:339] Error on socket receive: read tcp 192.168.39.170:8443->192.168.39.1:33544: use of closed network connection
	E0814 16:13:12.915570       1 conn.go:339] Error on socket receive: read tcp 192.168.39.170:8443->192.168.39.1:33570: use of closed network connection
	I0814 16:13:37.481555       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0814 16:13:38.527236       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0814 16:13:39.601821       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0814 16:13:39.786065       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.213.200"}
	I0814 16:13:57.570930       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0814 16:14:18.065898       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0814 16:14:21.449363       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.449421       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:14:21.480183       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.481396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:14:21.502996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.503109       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:14:21.541863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.541992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:14:21.578885       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.578930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0814 16:14:22.543009       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0814 16:14:22.579352       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0814 16:14:22.643825       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0814 16:14:33.769632       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.133.76"}
	E0814 16:14:55.092243       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.170:8443->10.244.0.32:40550: read: connection reset by peer
	I0814 16:16:00.170212       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.235.38"}
	E0814 16:16:02.286020       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
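
Note: the "Unable to authenticate the request" errors are consistent with tokens presented for service accounts (local-path-provisioner-service-account, ingress-nginx) that had already been deleted as their addons were torn down during the run, so they read as teardown noise rather than a cause of the failure. The apiserver log can be re-checked with, for example:

  kubectl --context addons-521895 -n kube-system logs kube-apiserver-addons-521895 --tail=100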
	
	
	==> kube-controller-manager [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0] <==
	I0814 16:14:56.846396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="4.258µs"
	W0814 16:14:57.388640       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:14:57.388678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:15:03.748100       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:03.748163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0814 16:15:20.705135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-521895"
	W0814 16:15:25.862049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:25.862108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:15:28.403921       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:28.404061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:15:34.984888       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:34.984948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:15:45.909391       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:15:45.909467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0814 16:16:00.002438       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="37.73985ms"
	I0814 16:16:00.019025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="16.482243ms"
	I0814 16:16:00.040053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="20.933638ms"
	I0814 16:16:00.040247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="73.024µs"
	I0814 16:16:02.180930       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0814 16:16:02.185666       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0814 16:16:02.197058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="3.802µs"
	I0814 16:16:03.614861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.977899ms"
	I0814 16:16:03.616906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="71.434µs"
	W0814 16:16:08.718941       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:16:08.719028       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:10:52.651643       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 16:10:52.661913       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	E0814 16:10:52.661991       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:10:52.747088       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:10:52.747138       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:10:52.747168       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:10:52.749869       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:10:52.750191       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:10:52.750216       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:10:52.752042       1 config.go:197] "Starting service config controller"
	I0814 16:10:52.752081       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:10:52.752101       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:10:52.752106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:10:52.752612       1 config.go:326] "Starting node config controller"
	I0814 16:10:52.752640       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:10:52.852251       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:10:52.852337       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:10:52.854608       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1] <==
	W0814 16:10:42.832853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:42.832884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:42.832969       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 16:10:42.832999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:42.833099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 16:10:42.833130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.695970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:43.696022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.703334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:10:43.703374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.734819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 16:10:43.734940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.777951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:10:43.778091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.865104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 16:10:43.865207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:44.009179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:44.009426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:44.010004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 16:10:44.010167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:44.013458       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 16:10:44.013495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:44.098686       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 16:10:44.098854       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0814 16:10:47.124245       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 16:15:59 addons-521895 kubelet[1224]: I0814 16:15:59.994497    1224 memory_manager.go:354] "RemoveStaleState removing state" podUID="be865efe-6514-4d4f-b8e3-6c2ccec2e6f2" containerName="tiller"
	Aug 14 16:16:00 addons-521895 kubelet[1224]: I0814 16:16:00.136915    1224 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th552\" (UniqueName: \"kubernetes.io/projected/3ef06ce6-4af7-44ee-b705-3b4afb65b830-kube-api-access-th552\") pod \"hello-world-app-55bf9c44b4-66swq\" (UID: \"3ef06ce6-4af7-44ee-b705-3b4afb65b830\") " pod="default/hello-world-app-55bf9c44b4-66swq"
	Aug 14 16:16:01 addons-521895 kubelet[1224]: I0814 16:16:01.144143    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbmz2\" (UniqueName: \"kubernetes.io/projected/025b355f-aadc-4f6b-a2de-96a654405923-kube-api-access-fbmz2\") pod \"025b355f-aadc-4f6b-a2de-96a654405923\" (UID: \"025b355f-aadc-4f6b-a2de-96a654405923\") "
	Aug 14 16:16:01 addons-521895 kubelet[1224]: I0814 16:16:01.147116    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025b355f-aadc-4f6b-a2de-96a654405923-kube-api-access-fbmz2" (OuterVolumeSpecName: "kube-api-access-fbmz2") pod "025b355f-aadc-4f6b-a2de-96a654405923" (UID: "025b355f-aadc-4f6b-a2de-96a654405923"). InnerVolumeSpecName "kube-api-access-fbmz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 16:16:01 addons-521895 kubelet[1224]: I0814 16:16:01.244553    1224 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fbmz2\" (UniqueName: \"kubernetes.io/projected/025b355f-aadc-4f6b-a2de-96a654405923-kube-api-access-fbmz2\") on node \"addons-521895\" DevicePath \"\""
	Aug 14 16:16:01 addons-521895 kubelet[1224]: I0814 16:16:01.581194    1224 scope.go:117] "RemoveContainer" containerID="c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428"
	Aug 14 16:16:01 addons-521895 kubelet[1224]: I0814 16:16:01.603886    1224 scope.go:117] "RemoveContainer" containerID="c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428"
	Aug 14 16:16:01 addons-521895 kubelet[1224]: E0814 16:16:01.604437    1224 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428\": container with ID starting with c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428 not found: ID does not exist" containerID="c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428"
	Aug 14 16:16:01 addons-521895 kubelet[1224]: I0814 16:16:01.604486    1224 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428"} err="failed to get container status \"c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428\": rpc error: code = NotFound desc = could not find container \"c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428\": container with ID starting with c2496439201f197d2ea9c10d57edd41f9ca596df11ed9ecd86a1f42e8b29c428 not found: ID does not exist"
	Aug 14 16:16:03 addons-521895 kubelet[1224]: I0814 16:16:03.524546    1224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025b355f-aadc-4f6b-a2de-96a654405923" path="/var/lib/kubelet/pods/025b355f-aadc-4f6b-a2de-96a654405923/volumes"
	Aug 14 16:16:03 addons-521895 kubelet[1224]: I0814 16:16:03.524946    1224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372041a8-f634-423e-9690-a7d6dac51dae" path="/var/lib/kubelet/pods/372041a8-f634-423e-9690-a7d6dac51dae/volumes"
	Aug 14 16:16:03 addons-521895 kubelet[1224]: I0814 16:16:03.525729    1224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a0e0fd8-9a00-47e0-8171-2dbd82d64ae8" path="/var/lib/kubelet/pods/5a0e0fd8-9a00-47e0-8171-2dbd82d64ae8/volumes"
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.483708    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65fa1487-a3bc-4b6e-95d2-bc7206d2434c-webhook-cert\") pod \"65fa1487-a3bc-4b6e-95d2-bc7206d2434c\" (UID: \"65fa1487-a3bc-4b6e-95d2-bc7206d2434c\") "
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.484135    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zppw8\" (UniqueName: \"kubernetes.io/projected/65fa1487-a3bc-4b6e-95d2-bc7206d2434c-kube-api-access-zppw8\") pod \"65fa1487-a3bc-4b6e-95d2-bc7206d2434c\" (UID: \"65fa1487-a3bc-4b6e-95d2-bc7206d2434c\") "
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.485891    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65fa1487-a3bc-4b6e-95d2-bc7206d2434c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "65fa1487-a3bc-4b6e-95d2-bc7206d2434c" (UID: "65fa1487-a3bc-4b6e-95d2-bc7206d2434c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.487512    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65fa1487-a3bc-4b6e-95d2-bc7206d2434c-kube-api-access-zppw8" (OuterVolumeSpecName: "kube-api-access-zppw8") pod "65fa1487-a3bc-4b6e-95d2-bc7206d2434c" (UID: "65fa1487-a3bc-4b6e-95d2-bc7206d2434c"). InnerVolumeSpecName "kube-api-access-zppw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.525178    1224 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65fa1487-a3bc-4b6e-95d2-bc7206d2434c" path="/var/lib/kubelet/pods/65fa1487-a3bc-4b6e-95d2-bc7206d2434c/volumes"
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.584928    1224 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65fa1487-a3bc-4b6e-95d2-bc7206d2434c-webhook-cert\") on node \"addons-521895\" DevicePath \"\""
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.584965    1224 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zppw8\" (UniqueName: \"kubernetes.io/projected/65fa1487-a3bc-4b6e-95d2-bc7206d2434c-kube-api-access-zppw8\") on node \"addons-521895\" DevicePath \"\""
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.604963    1224 scope.go:117] "RemoveContainer" containerID="b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e"
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.619751    1224 scope.go:117] "RemoveContainer" containerID="b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e"
	Aug 14 16:16:05 addons-521895 kubelet[1224]: E0814 16:16:05.620238    1224 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e\": container with ID starting with b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e not found: ID does not exist" containerID="b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e"
	Aug 14 16:16:05 addons-521895 kubelet[1224]: I0814 16:16:05.620321    1224 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e"} err="failed to get container status \"b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e\": rpc error: code = NotFound desc = could not find container \"b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e\": container with ID starting with b7ac6c5968398fd73aaaba5c7e73107a330bb1dfdf21b1986e2e29b531313a2e not found: ID does not exist"
	Aug 14 16:16:05 addons-521895 kubelet[1224]: E0814 16:16:05.935738    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652165935248049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:16:05 addons-521895 kubelet[1224]: E0814 16:16:05.935770    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652165935248049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a] <==
	I0814 16:10:56.980080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 16:10:57.074006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 16:10:57.074083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 16:10:57.095629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 16:10:57.095836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-521895_7d037236-7a9e-4cc2-b0a9-01f811657084!
	I0814 16:10:57.122404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0a62dbb-6b92-4b4d-ba20-4ea5c75c1d2d", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-521895_7d037236-7a9e-4cc2-b0a9-01f811657084 became leader
	I0814 16:10:57.196301       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-521895_7d037236-7a9e-4cc2-b0a9-01f811657084!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-521895 -n addons-521895
helpers_test.go:261: (dbg) Run:  kubectl --context addons-521895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.84s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (321.03s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.899595ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-d5x8v" [efa28343-d15d-4a26-bc87-4c5c4e6cce30] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003028652s
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (75.554208ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 2m37.320769472s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (64.315457ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 2m40.960440392s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (70.212266ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 2m44.005623077s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (64.221402ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 2m48.20932291s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (68.665406ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 2m57.586126683s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (62.547106ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 3m9.740797635s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (80.639792ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 3m25.462085338s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (61.402225ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 3m50.306254994s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (59.705024ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 4m23.354468665s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (63.802784ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 5m15.860783157s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (60.582176ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 6m10.204988826s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (61.60303ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 6m57.96903694s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-521895 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-521895 top pods -n kube-system: exit status 1 (62.346097ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-7rf58, age: 7m49.680952779s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-521895 -n addons-521895
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-521895 logs -n 25: (1.216079875s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-495471                                                                     | download-only-495471 | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC | 14 Aug 24 16:10 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-629887 | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC |                     |
	|         | binary-mirror-629887                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46569                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-629887                                                                     | binary-mirror-629887 | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC | 14 Aug 24 16:10 UTC |
	| addons  | disable dashboard -p                                                                        | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC |                     |
	|         | addons-521895                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC |                     |
	|         | addons-521895                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-521895 --wait=true                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:10 UTC | 14 Aug 24 16:13 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | addons-521895                                                                               |                      |         |         |                     |                     |
	| ip      | addons-521895 ip                                                                            | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC | 14 Aug 24 16:13 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-521895 ssh curl -s                                                                   | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:13 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-521895 ssh cat                                                                       | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | /opt/local-path-provisioner/pvc-230f268c-e9fb-47c8-a734-e535e5b8b6a9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-521895 addons                                                                        | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-521895 addons                                                                        | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | -p addons-521895                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | addons-521895                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | -p addons-521895                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:14 UTC | 14 Aug 24 16:14 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-521895 ip                                                                            | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:16 UTC | 14 Aug 24 16:16 UTC |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:16 UTC | 14 Aug 24 16:16 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-521895 addons disable                                                                | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:16 UTC | 14 Aug 24 16:16 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-521895 addons                                                                        | addons-521895        | jenkins | v1.33.1 | 14 Aug 24 16:18 UTC | 14 Aug 24 16:18 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:10:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:10:06.091073   21883 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:10:06.091202   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:10:06.091212   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:10:06.091217   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:10:06.091439   21883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:10:06.092072   21883 out.go:298] Setting JSON to false
	I0814 16:10:06.092936   21883 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3150,"bootTime":1723648656,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:10:06.092990   21883 start.go:139] virtualization: kvm guest
	I0814 16:10:06.095031   21883 out.go:177] * [addons-521895] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:10:06.096420   21883 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:10:06.096420   21883 notify.go:220] Checking for updates...
	I0814 16:10:06.097937   21883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:10:06.099288   21883 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:10:06.100579   21883 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:10:06.101794   21883 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:10:06.103045   21883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:10:06.104357   21883 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:10:06.134990   21883 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 16:10:06.136076   21883 start.go:297] selected driver: kvm2
	I0814 16:10:06.136097   21883 start.go:901] validating driver "kvm2" against <nil>
	I0814 16:10:06.136108   21883 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:10:06.136812   21883 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:10:06.136886   21883 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 16:10:06.151588   21883 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 16:10:06.151640   21883 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:10:06.151879   21883 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:10:06.151953   21883 cni.go:84] Creating CNI manager for ""
	I0814 16:10:06.151969   21883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 16:10:06.151981   21883 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 16:10:06.152044   21883 start.go:340] cluster config:
	{Name:addons-521895 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:10:06.152159   21883 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:10:06.153813   21883 out.go:177] * Starting "addons-521895" primary control-plane node in "addons-521895" cluster
	I0814 16:10:06.155036   21883 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:10:06.155072   21883 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:10:06.155087   21883 cache.go:56] Caching tarball of preloaded images
	I0814 16:10:06.155184   21883 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:10:06.155198   21883 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:10:06.155566   21883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/config.json ...
	I0814 16:10:06.155590   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/config.json: {Name:mk2c74c8b25cb0d239f5c19085340188d3cc7de6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:06.155739   21883 start.go:360] acquireMachinesLock for addons-521895: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:10:06.155795   21883 start.go:364] duration metric: took 40.446µs to acquireMachinesLock for "addons-521895"
	I0814 16:10:06.155816   21883 start.go:93] Provisioning new machine with config: &{Name:addons-521895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:10:06.155887   21883 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 16:10:06.157473   21883 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0814 16:10:06.157631   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:06.157680   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:06.171750   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
	I0814 16:10:06.172210   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:06.172751   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:06.172771   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:06.173130   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:06.173373   21883 main.go:141] libmachine: (addons-521895) Calling .GetMachineName
	I0814 16:10:06.173580   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:06.173769   21883 start.go:159] libmachine.API.Create for "addons-521895" (driver="kvm2")
	I0814 16:10:06.173804   21883 client.go:168] LocalClient.Create starting
	I0814 16:10:06.173856   21883 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 16:10:06.373032   21883 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 16:10:06.467215   21883 main.go:141] libmachine: Running pre-create checks...
	I0814 16:10:06.467238   21883 main.go:141] libmachine: (addons-521895) Calling .PreCreateCheck
	I0814 16:10:06.467777   21883 main.go:141] libmachine: (addons-521895) Calling .GetConfigRaw
	I0814 16:10:06.468187   21883 main.go:141] libmachine: Creating machine...
	I0814 16:10:06.468201   21883 main.go:141] libmachine: (addons-521895) Calling .Create
	I0814 16:10:06.468354   21883 main.go:141] libmachine: (addons-521895) Creating KVM machine...
	I0814 16:10:06.469538   21883 main.go:141] libmachine: (addons-521895) DBG | found existing default KVM network
	I0814 16:10:06.470305   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:06.470159   21904 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0814 16:10:06.470332   21883 main.go:141] libmachine: (addons-521895) DBG | created network xml: 
	I0814 16:10:06.470345   21883 main.go:141] libmachine: (addons-521895) DBG | <network>
	I0814 16:10:06.470356   21883 main.go:141] libmachine: (addons-521895) DBG |   <name>mk-addons-521895</name>
	I0814 16:10:06.470369   21883 main.go:141] libmachine: (addons-521895) DBG |   <dns enable='no'/>
	I0814 16:10:06.470379   21883 main.go:141] libmachine: (addons-521895) DBG |   
	I0814 16:10:06.470391   21883 main.go:141] libmachine: (addons-521895) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0814 16:10:06.470401   21883 main.go:141] libmachine: (addons-521895) DBG |     <dhcp>
	I0814 16:10:06.470411   21883 main.go:141] libmachine: (addons-521895) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0814 16:10:06.470424   21883 main.go:141] libmachine: (addons-521895) DBG |     </dhcp>
	I0814 16:10:06.470430   21883 main.go:141] libmachine: (addons-521895) DBG |   </ip>
	I0814 16:10:06.470435   21883 main.go:141] libmachine: (addons-521895) DBG |   
	I0814 16:10:06.470441   21883 main.go:141] libmachine: (addons-521895) DBG | </network>
	I0814 16:10:06.470447   21883 main.go:141] libmachine: (addons-521895) DBG | 
	I0814 16:10:06.475921   21883 main.go:141] libmachine: (addons-521895) DBG | trying to create private KVM network mk-addons-521895 192.168.39.0/24...
	I0814 16:10:06.539319   21883 main.go:141] libmachine: (addons-521895) DBG | private KVM network mk-addons-521895 192.168.39.0/24 created
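
The network XML logged just above is what gets handed to libvirt to define and start the private mk-addons-521895 network. The KVM driver does this through libvirt's API rather than the CLI; purely as an illustrative equivalent (the temp file and the use of virsh are assumptions, not minikube's code), the same two steps look like this in Go:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // networkXML is the definition printed in the log above.
    const networkXML = `<network>
      <name>mk-addons-521895</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        f, err := os.CreateTemp("", "mk-addons-*.xml")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            log.Fatal(err)
        }
        f.Close()

        // net-define registers the network, net-start brings it up.
        for _, args := range [][]string{{"net-define", f.Name()}, {"net-start", "mk-addons-521895"}} {
            out, err := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
            if err != nil {
                log.Fatalf("virsh %v: %v\n%s", args, err, out)
            }
        }
    }
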
	I0814 16:10:06.539383   21883 main.go:141] libmachine: (addons-521895) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895 ...
	I0814 16:10:06.539416   21883 main.go:141] libmachine: (addons-521895) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:10:06.539489   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:06.539365   21904 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:10:06.539609   21883 main.go:141] libmachine: (addons-521895) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 16:10:06.790932   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:06.790767   21904 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa...
	I0814 16:10:07.007275   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:07.007131   21904 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/addons-521895.rawdisk...
	I0814 16:10:07.007339   21883 main.go:141] libmachine: (addons-521895) DBG | Writing magic tar header
	I0814 16:10:07.007398   21883 main.go:141] libmachine: (addons-521895) DBG | Writing SSH key tar header
	I0814 16:10:07.007436   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:07.007248   21904 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895 ...
	I0814 16:10:07.007477   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895 (perms=drwx------)
	I0814 16:10:07.007490   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895
	I0814 16:10:07.007497   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 16:10:07.007509   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 16:10:07.007518   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 16:10:07.007532   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 16:10:07.007542   21883 main.go:141] libmachine: (addons-521895) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 16:10:07.007555   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 16:10:07.007571   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:10:07.007583   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 16:10:07.007600   21883 main.go:141] libmachine: (addons-521895) Creating domain...
	I0814 16:10:07.007610   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 16:10:07.007624   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home/jenkins
	I0814 16:10:07.007632   21883 main.go:141] libmachine: (addons-521895) DBG | Checking permissions on dir: /home
	I0814 16:10:07.007647   21883 main.go:141] libmachine: (addons-521895) DBG | Skipping /home - not owner
	I0814 16:10:07.008634   21883 main.go:141] libmachine: (addons-521895) define libvirt domain using xml: 
	I0814 16:10:07.008658   21883 main.go:141] libmachine: (addons-521895) <domain type='kvm'>
	I0814 16:10:07.008669   21883 main.go:141] libmachine: (addons-521895)   <name>addons-521895</name>
	I0814 16:10:07.008677   21883 main.go:141] libmachine: (addons-521895)   <memory unit='MiB'>4000</memory>
	I0814 16:10:07.008706   21883 main.go:141] libmachine: (addons-521895)   <vcpu>2</vcpu>
	I0814 16:10:07.008729   21883 main.go:141] libmachine: (addons-521895)   <features>
	I0814 16:10:07.008736   21883 main.go:141] libmachine: (addons-521895)     <acpi/>
	I0814 16:10:07.008743   21883 main.go:141] libmachine: (addons-521895)     <apic/>
	I0814 16:10:07.008749   21883 main.go:141] libmachine: (addons-521895)     <pae/>
	I0814 16:10:07.008755   21883 main.go:141] libmachine: (addons-521895)     
	I0814 16:10:07.008763   21883 main.go:141] libmachine: (addons-521895)   </features>
	I0814 16:10:07.008768   21883 main.go:141] libmachine: (addons-521895)   <cpu mode='host-passthrough'>
	I0814 16:10:07.008777   21883 main.go:141] libmachine: (addons-521895)   
	I0814 16:10:07.008791   21883 main.go:141] libmachine: (addons-521895)   </cpu>
	I0814 16:10:07.008802   21883 main.go:141] libmachine: (addons-521895)   <os>
	I0814 16:10:07.008833   21883 main.go:141] libmachine: (addons-521895)     <type>hvm</type>
	I0814 16:10:07.008851   21883 main.go:141] libmachine: (addons-521895)     <boot dev='cdrom'/>
	I0814 16:10:07.008865   21883 main.go:141] libmachine: (addons-521895)     <boot dev='hd'/>
	I0814 16:10:07.008877   21883 main.go:141] libmachine: (addons-521895)     <bootmenu enable='no'/>
	I0814 16:10:07.008889   21883 main.go:141] libmachine: (addons-521895)   </os>
	I0814 16:10:07.008900   21883 main.go:141] libmachine: (addons-521895)   <devices>
	I0814 16:10:07.008909   21883 main.go:141] libmachine: (addons-521895)     <disk type='file' device='cdrom'>
	I0814 16:10:07.008928   21883 main.go:141] libmachine: (addons-521895)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/boot2docker.iso'/>
	I0814 16:10:07.008937   21883 main.go:141] libmachine: (addons-521895)       <target dev='hdc' bus='scsi'/>
	I0814 16:10:07.008944   21883 main.go:141] libmachine: (addons-521895)       <readonly/>
	I0814 16:10:07.008956   21883 main.go:141] libmachine: (addons-521895)     </disk>
	I0814 16:10:07.008968   21883 main.go:141] libmachine: (addons-521895)     <disk type='file' device='disk'>
	I0814 16:10:07.008982   21883 main.go:141] libmachine: (addons-521895)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 16:10:07.009000   21883 main.go:141] libmachine: (addons-521895)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/addons-521895.rawdisk'/>
	I0814 16:10:07.009012   21883 main.go:141] libmachine: (addons-521895)       <target dev='hda' bus='virtio'/>
	I0814 16:10:07.009020   21883 main.go:141] libmachine: (addons-521895)     </disk>
	I0814 16:10:07.009028   21883 main.go:141] libmachine: (addons-521895)     <interface type='network'>
	I0814 16:10:07.009040   21883 main.go:141] libmachine: (addons-521895)       <source network='mk-addons-521895'/>
	I0814 16:10:07.009053   21883 main.go:141] libmachine: (addons-521895)       <model type='virtio'/>
	I0814 16:10:07.009067   21883 main.go:141] libmachine: (addons-521895)     </interface>
	I0814 16:10:07.009080   21883 main.go:141] libmachine: (addons-521895)     <interface type='network'>
	I0814 16:10:07.009090   21883 main.go:141] libmachine: (addons-521895)       <source network='default'/>
	I0814 16:10:07.009099   21883 main.go:141] libmachine: (addons-521895)       <model type='virtio'/>
	I0814 16:10:07.009107   21883 main.go:141] libmachine: (addons-521895)     </interface>
	I0814 16:10:07.009114   21883 main.go:141] libmachine: (addons-521895)     <serial type='pty'>
	I0814 16:10:07.009124   21883 main.go:141] libmachine: (addons-521895)       <target port='0'/>
	I0814 16:10:07.009138   21883 main.go:141] libmachine: (addons-521895)     </serial>
	I0814 16:10:07.009153   21883 main.go:141] libmachine: (addons-521895)     <console type='pty'>
	I0814 16:10:07.009167   21883 main.go:141] libmachine: (addons-521895)       <target type='serial' port='0'/>
	I0814 16:10:07.009174   21883 main.go:141] libmachine: (addons-521895)     </console>
	I0814 16:10:07.009179   21883 main.go:141] libmachine: (addons-521895)     <rng model='virtio'>
	I0814 16:10:07.009187   21883 main.go:141] libmachine: (addons-521895)       <backend model='random'>/dev/random</backend>
	I0814 16:10:07.009192   21883 main.go:141] libmachine: (addons-521895)     </rng>
	I0814 16:10:07.009205   21883 main.go:141] libmachine: (addons-521895)     
	I0814 16:10:07.009213   21883 main.go:141] libmachine: (addons-521895)     
	I0814 16:10:07.009217   21883 main.go:141] libmachine: (addons-521895)   </devices>
	I0814 16:10:07.009224   21883 main.go:141] libmachine: (addons-521895) </domain>
	I0814 16:10:07.009230   21883 main.go:141] libmachine: (addons-521895) 
	I0814 16:10:07.014772   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:37:48:6c in network default
	I0814 16:10:07.015343   21883 main.go:141] libmachine: (addons-521895) Ensuring networks are active...
	I0814 16:10:07.015368   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:07.015996   21883 main.go:141] libmachine: (addons-521895) Ensuring network default is active
	I0814 16:10:07.016257   21883 main.go:141] libmachine: (addons-521895) Ensuring network mk-addons-521895 is active
	I0814 16:10:07.016769   21883 main.go:141] libmachine: (addons-521895) Getting domain xml...
	I0814 16:10:07.017354   21883 main.go:141] libmachine: (addons-521895) Creating domain...
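
"define libvirt domain using xml" and "Creating domain..." correspond to registering the domain from the XML above and then booting it. A minimal sketch, assuming the libvirt.org/go/libvirt bindings and that the logged XML has been saved to addons-521895.xml (both assumptions; the docker-machine-driver-kvm2 plugin's real code differs):

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // addons-521895.xml is assumed to hold the <domain type='kvm'> XML logged above.
        xmlBytes, err := os.ReadFile("addons-521895.xml")
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // "define libvirt domain using xml" registers the persistent definition ...
        dom, err := conn.DomainDefineXML(string(xmlBytes))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        // ... and "Creating domain..." boots it.
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
    }
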
	I0814 16:10:08.505220   21883 main.go:141] libmachine: (addons-521895) Waiting to get IP...
	I0814 16:10:08.505999   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:08.506400   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:08.506468   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:08.506393   21904 retry.go:31] will retry after 213.210861ms: waiting for machine to come up
	I0814 16:10:08.720879   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:08.721362   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:08.721392   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:08.721320   21904 retry.go:31] will retry after 336.947709ms: waiting for machine to come up
	I0814 16:10:09.059913   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:09.060313   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:09.060336   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:09.060276   21904 retry.go:31] will retry after 460.065602ms: waiting for machine to come up
	I0814 16:10:09.522017   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:09.522500   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:09.522521   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:09.522458   21904 retry.go:31] will retry after 501.941374ms: waiting for machine to come up
	I0814 16:10:10.026142   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:10.026609   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:10.026636   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:10.026543   21904 retry.go:31] will retry after 597.530335ms: waiting for machine to come up
	I0814 16:10:10.625427   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:10.625850   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:10.625883   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:10.625803   21904 retry.go:31] will retry after 663.235732ms: waiting for machine to come up
	I0814 16:10:11.290110   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:11.290474   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:11.290503   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:11.290436   21904 retry.go:31] will retry after 724.896752ms: waiting for machine to come up
	I0814 16:10:12.017557   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:12.017965   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:12.018000   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:12.017910   21904 retry.go:31] will retry after 1.368272068s: waiting for machine to come up
	I0814 16:10:13.388301   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:13.388796   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:13.388822   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:13.388762   21904 retry.go:31] will retry after 1.65786077s: waiting for machine to come up
	I0814 16:10:15.048569   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:15.048973   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:15.048995   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:15.048927   21904 retry.go:31] will retry after 1.882924604s: waiting for machine to come up
	I0814 16:10:16.933623   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:16.934070   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:16.934096   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:16.934015   21904 retry.go:31] will retry after 2.299175394s: waiting for machine to come up
	I0814 16:10:19.236440   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:19.236924   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:19.236953   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:19.236889   21904 retry.go:31] will retry after 2.528572299s: waiting for machine to come up
	I0814 16:10:21.766926   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:21.767229   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:21.767249   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:21.767188   21904 retry.go:31] will retry after 3.003549239s: waiting for machine to come up
	I0814 16:10:24.774309   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:24.774732   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find current IP address of domain addons-521895 in network mk-addons-521895
	I0814 16:10:24.774754   21883 main.go:141] libmachine: (addons-521895) DBG | I0814 16:10:24.774697   21904 retry.go:31] will retry after 3.710828731s: waiting for machine to come up
	I0814 16:10:28.488500   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.488945   21883 main.go:141] libmachine: (addons-521895) Found IP for machine: 192.168.39.170
	I0814 16:10:28.488968   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has current primary IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.488974   21883 main.go:141] libmachine: (addons-521895) Reserving static IP address...
	I0814 16:10:28.489472   21883 main.go:141] libmachine: (addons-521895) DBG | unable to find host DHCP lease matching {name: "addons-521895", mac: "52:54:00:8a:83:8f", ip: "192.168.39.170"} in network mk-addons-521895
	I0814 16:10:28.558975   21883 main.go:141] libmachine: (addons-521895) DBG | Getting to WaitForSSH function...
	I0814 16:10:28.559008   21883 main.go:141] libmachine: (addons-521895) Reserved static IP address: 192.168.39.170
	I0814 16:10:28.559021   21883 main.go:141] libmachine: (addons-521895) Waiting for SSH to be available...
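
The run of "will retry after ..." lines above is the driver polling libvirt's DHCP leases with a growing delay until the guest picks up an address (here 192.168.39.170 after roughly 22 seconds). A minimal sketch of that wait loop; the function name, starting delay and growth factor are illustrative, not minikube's retry package:

    package main

    import (
        "errors"
        "log"
        "time"
    )

    // waitForIP polls checkIP with a growing delay until an address shows up or the
    // timeout expires. Name, starting delay and growth factor are illustrative only.
    func waitForIP(checkIP func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := checkIP(); ok {
                return ip, nil
            }
            log.Printf("will retry after %v: waiting for machine to come up", delay)
            time.Sleep(delay)
            delay += delay / 2 // stretch the pause between DHCP lease lookups
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        polls := 0
        ip, err := waitForIP(func() (string, bool) {
            polls++
            return "192.168.39.170", polls > 3 // pretend the lease appears on the 4th poll
        }, time.Minute)
        log.Println(ip, err)
    }
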
	I0814 16:10:28.561385   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.561823   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:28.561858   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.562054   21883 main.go:141] libmachine: (addons-521895) DBG | Using SSH client type: external
	I0814 16:10:28.562084   21883 main.go:141] libmachine: (addons-521895) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa (-rw-------)
	I0814 16:10:28.562128   21883 main.go:141] libmachine: (addons-521895) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:10:28.562145   21883 main.go:141] libmachine: (addons-521895) DBG | About to run SSH command:
	I0814 16:10:28.562160   21883 main.go:141] libmachine: (addons-521895) DBG | exit 0
	I0814 16:10:28.691223   21883 main.go:141] libmachine: (addons-521895) DBG | SSH cmd err, output: <nil>: 
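
The "About to run SSH command: exit 0" step is a reachability probe: the driver shells out to the external ssh client with the options shown above and treats a zero exit status as "SSH is available". A hedged sketch of the same probe with os/exec, reusing a subset of the flags, key path and address from the log:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Run `ssh ... exit 0` with the external client; a zero exit status means
        // the guest's sshd is up and the key is accepted.
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa",
            "-p", "22",
            "docker@192.168.39.170",
            "exit 0")
        if err := cmd.Run(); err != nil {
            log.Printf("SSH not ready yet: %v", err) // the driver would sleep and retry
            return
        }
        log.Println("SSH is available")
    }
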
	I0814 16:10:28.691524   21883 main.go:141] libmachine: (addons-521895) KVM machine creation complete!
	I0814 16:10:28.691862   21883 main.go:141] libmachine: (addons-521895) Calling .GetConfigRaw
	I0814 16:10:28.692374   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:28.692548   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:28.692689   21883 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 16:10:28.692700   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:28.694191   21883 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 16:10:28.694205   21883 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 16:10:28.694210   21883 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 16:10:28.694216   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:28.696636   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.697107   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:28.697131   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.697278   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:28.697438   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.697555   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.697701   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:28.697892   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:28.698064   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:28.698078   21883 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 16:10:28.794350   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:10:28.794371   21883 main.go:141] libmachine: Detecting the provisioner...
	I0814 16:10:28.794379   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:28.796898   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.797259   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:28.797279   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.797414   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:28.797597   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.797746   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.797896   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:28.798063   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:28.798236   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:28.798249   21883 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 16:10:28.895484   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 16:10:28.895582   21883 main.go:141] libmachine: found compatible host: buildroot
	I0814 16:10:28.895598   21883 main.go:141] libmachine: Provisioning with buildroot...
	I0814 16:10:28.895608   21883 main.go:141] libmachine: (addons-521895) Calling .GetMachineName
	I0814 16:10:28.895883   21883 buildroot.go:166] provisioning hostname "addons-521895"
	I0814 16:10:28.895906   21883 main.go:141] libmachine: (addons-521895) Calling .GetMachineName
	I0814 16:10:28.896099   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:28.898660   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.899046   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:28.899062   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:28.899194   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:28.899373   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.899502   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:28.899626   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:28.899764   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:28.899931   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:28.899944   21883 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-521895 && echo "addons-521895" | sudo tee /etc/hostname
	I0814 16:10:29.012661   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-521895
	
	I0814 16:10:29.012691   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.015369   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.015724   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.015757   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.015848   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.016037   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.016205   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.016336   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.016497   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:29.016666   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:29.016680   21883 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-521895' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-521895/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-521895' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:10:29.123805   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:10:29.123837   21883 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:10:29.123926   21883 buildroot.go:174] setting up certificates
	I0814 16:10:29.123944   21883 provision.go:84] configureAuth start
	I0814 16:10:29.123964   21883 main.go:141] libmachine: (addons-521895) Calling .GetMachineName
	I0814 16:10:29.124300   21883 main.go:141] libmachine: (addons-521895) Calling .GetIP
	I0814 16:10:29.127098   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.127615   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.127644   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.127840   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.130023   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.130326   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.130353   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.130502   21883 provision.go:143] copyHostCerts
	I0814 16:10:29.130655   21883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:10:29.130822   21883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:10:29.130920   21883 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:10:29.130995   21883 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.addons-521895 san=[127.0.0.1 192.168.39.170 addons-521895 localhost minikube]
	I0814 16:10:29.392495   21883 provision.go:177] copyRemoteCerts
	I0814 16:10:29.392547   21883 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:10:29.392568   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.394916   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.395267   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.395292   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.395450   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.395651   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.395788   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.395922   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:29.476686   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:10:29.498787   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:10:29.520616   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 16:10:29.543128   21883 provision.go:87] duration metric: took 419.159107ms to configureAuth
	I0814 16:10:29.543167   21883 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:10:29.543361   21883 config.go:182] Loaded profile config "addons-521895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:29.543448   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.546123   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.546576   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.546602   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.546821   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.547012   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.547135   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.547291   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.547476   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:29.547639   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:29.547658   21883 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:10:29.802009   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
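
The literal %!s(MISSING) in the command above (and the %!N(MISSING) and %!p(MISSING) in later commands) is not part of what ran on the guest; it is Go's fmt package flagging a format verb whose argument was not passed when the command string was logged. The date command further down, for example, was presumably date +%s.%N, which matches the seconds.nanoseconds value it returns. A two-line demonstration:

    package main

    import "fmt"

    func main() {
        // With no argument, every verb is rendered as %!<verb>(MISSING):
        fmt.Printf("date +%s.%N\n") // prints: date +%!s(MISSING).%!N(MISSING)
        // With the argument supplied, the command string comes out intact:
        fmt.Printf("date +%s\n", "%s.%N") // prints: date +%s.%N
    }
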
	
	I0814 16:10:29.802042   21883 main.go:141] libmachine: Checking connection to Docker...
	I0814 16:10:29.802056   21883 main.go:141] libmachine: (addons-521895) Calling .GetURL
	I0814 16:10:29.803354   21883 main.go:141] libmachine: (addons-521895) DBG | Using libvirt version 6000000
	I0814 16:10:29.805409   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.805666   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.805690   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.805847   21883 main.go:141] libmachine: Docker is up and running!
	I0814 16:10:29.805869   21883 main.go:141] libmachine: Reticulating splines...
	I0814 16:10:29.805879   21883 client.go:171] duration metric: took 23.632061619s to LocalClient.Create
	I0814 16:10:29.805908   21883 start.go:167] duration metric: took 23.632142197s to libmachine.API.Create "addons-521895"
	I0814 16:10:29.805929   21883 start.go:293] postStartSetup for "addons-521895" (driver="kvm2")
	I0814 16:10:29.805942   21883 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:10:29.805963   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:29.806237   21883 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:10:29.806261   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.808336   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.808653   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.808679   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.808818   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.808991   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.809141   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.809279   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:29.889298   21883 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:10:29.893436   21883 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:10:29.893461   21883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:10:29.893521   21883 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:10:29.893549   21883 start.go:296] duration metric: took 87.611334ms for postStartSetup
	I0814 16:10:29.893578   21883 main.go:141] libmachine: (addons-521895) Calling .GetConfigRaw
	I0814 16:10:29.894081   21883 main.go:141] libmachine: (addons-521895) Calling .GetIP
	I0814 16:10:29.896884   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.897150   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.897178   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.897446   21883 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/config.json ...
	I0814 16:10:29.897619   21883 start.go:128] duration metric: took 23.741722706s to createHost
	I0814 16:10:29.897647   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:29.899839   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.900131   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:29.900176   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:29.900275   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:29.900448   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.900602   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:29.900715   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:29.900889   21883 main.go:141] libmachine: Using SSH client type: native
	I0814 16:10:29.901114   21883 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0814 16:10:29.901129   21883 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:10:29.999778   21883 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723651829.977425271
	
	I0814 16:10:29.999799   21883 fix.go:216] guest clock: 1723651829.977425271
	I0814 16:10:29.999807   21883 fix.go:229] Guest: 2024-08-14 16:10:29.977425271 +0000 UTC Remote: 2024-08-14 16:10:29.89763113 +0000 UTC m=+23.840249664 (delta=79.794141ms)
	I0814 16:10:29.999826   21883 fix.go:200] guest clock delta is within tolerance: 79.794141ms
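
The clock check subtracts the host-side timestamp recorded when createHost returned (16:10:29.897631130 UTC, the "Remote" value) from the guest's date +%s.%N reading (16:10:29.977425271 UTC): 0.977425271 s - 0.897631130 s = 79.794141 ms, exactly the delta logged above, so the guest clock is left untouched. A small sketch of the comparison, with the two readings copied from the log and an assumed tolerance:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both readings are copied from the log: the guest's date +%s.%N output and
        // the host timestamp recorded when createHost returned.
        guest := time.Unix(1723651829, 977425271)
        host := time.Date(2024, 8, 14, 16, 10, 29, 897631130, time.UTC)

        delta := guest.Sub(host)
        fmt.Println(delta) // 79.794141ms

        // Assumed tolerance, purely for illustration; only a large skew would trigger a resync.
        const tolerance = 2 * time.Second
        if delta < -tolerance || delta > tolerance {
            fmt.Println("would resync the guest clock here")
        }
    }
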
	I0814 16:10:29.999831   21883 start.go:83] releasing machines lock for "addons-521895", held for 23.844024817s
	I0814 16:10:29.999849   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:30.000127   21883 main.go:141] libmachine: (addons-521895) Calling .GetIP
	I0814 16:10:30.002906   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.003230   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:30.003261   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.003381   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:30.003954   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:30.004196   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:30.004266   21883 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:10:30.004312   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:30.004428   21883 ssh_runner.go:195] Run: cat /version.json
	I0814 16:10:30.004447   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:30.007808   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.007966   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.008197   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:30.008220   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.008408   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:30.008534   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:30.008561   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:30.008571   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:30.008693   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:30.008799   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:30.008872   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:30.009030   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:30.009052   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:30.009195   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:30.120216   21883 ssh_runner.go:195] Run: systemctl --version
	I0814 16:10:30.125753   21883 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:10:30.280905   21883 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:10:30.286714   21883 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:10:30.286775   21883 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:10:30.302067   21883 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 16:10:30.302090   21883 start.go:495] detecting cgroup driver to use...
	I0814 16:10:30.302142   21883 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:10:30.317711   21883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:10:30.330349   21883 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:10:30.330394   21883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:10:30.343081   21883 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:10:30.355658   21883 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:10:30.466030   21883 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:10:30.628663   21883 docker.go:233] disabling docker service ...
	I0814 16:10:30.628743   21883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:10:30.642641   21883 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:10:30.654798   21883 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:10:30.760830   21883 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:10:30.869341   21883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:10:30.883051   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:10:30.900734   21883 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:10:30.900790   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.910788   21883 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:10:30.910846   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.921453   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.931882   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.941563   21883 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:10:30.951596   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.961498   21883 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:10:30.977531   21883 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
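
Taken together, the sed/grep commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, force conmon into the pod cgroup, and allow unprivileged binds to low ports. After those edits the touched keys of the drop-in would look roughly as follows (the section headers follow CRI-O's usual crio.conf layout and are an assumption; other shipped keys are omitted):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"
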
	I0814 16:10:30.987513   21883 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:10:30.996778   21883 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 16:10:30.996837   21883 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 16:10:31.009574   21883 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:10:31.018737   21883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:31.131242   21883 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 16:10:31.265302   21883 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:10:31.265418   21883 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:10:31.269437   21883 start.go:563] Will wait 60s for crictl version
	I0814 16:10:31.269505   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:10:31.272777   21883 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:10:31.309296   21883 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:10:31.309426   21883 ssh_runner.go:195] Run: crio --version
	I0814 16:10:31.338286   21883 ssh_runner.go:195] Run: crio --version
	I0814 16:10:31.364654   21883 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:10:31.365724   21883 main.go:141] libmachine: (addons-521895) Calling .GetIP
	I0814 16:10:31.368040   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:31.368513   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:31.368539   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:31.368794   21883 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:10:31.372348   21883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:10:31.383506   21883 kubeadm.go:883] updating cluster {Name:addons-521895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 16:10:31.383598   21883 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:10:31.383687   21883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:10:31.412955   21883 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 16:10:31.413014   21883 ssh_runner.go:195] Run: which lz4
	I0814 16:10:31.416518   21883 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 16:10:31.420240   21883 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 16:10:31.420268   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 16:10:32.533117   21883 crio.go:462] duration metric: took 1.11663254s to copy over tarball
	I0814 16:10:32.533201   21883 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 16:10:34.622945   21883 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.089710145s)
	I0814 16:10:34.622979   21883 crio.go:469] duration metric: took 2.089833263s to extract the tarball
	I0814 16:10:34.622987   21883 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 16:10:34.659111   21883 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:10:34.712778   21883 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:10:34.712802   21883 cache_images.go:84] Images are preloaded, skipping loading
	I0814 16:10:34.712810   21883 kubeadm.go:934] updating node { 192.168.39.170 8443 v1.31.0 crio true true} ...
	I0814 16:10:34.712902   21883 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-521895 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:10:34.712964   21883 ssh_runner.go:195] Run: crio config
	I0814 16:10:34.764521   21883 cni.go:84] Creating CNI manager for ""
	I0814 16:10:34.764539   21883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 16:10:34.764550   21883 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 16:10:34.764570   21883 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-521895 NodeName:addons-521895 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 16:10:34.764704   21883 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-521895"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 16:10:34.764758   21883 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:10:34.774903   21883 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 16:10:34.774960   21883 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 16:10:34.784387   21883 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 16:10:34.799459   21883 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:10:34.814188   21883 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0814 16:10:34.829136   21883 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0814 16:10:34.832558   21883 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:10:34.843692   21883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:34.962207   21883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:10:34.977962   21883 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895 for IP: 192.168.39.170
	I0814 16:10:34.977985   21883 certs.go:194] generating shared ca certs ...
	I0814 16:10:34.978000   21883 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:34.978138   21883 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:10:35.198673   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt ...
	I0814 16:10:35.198703   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt: {Name:mk62824be8e10bd263c0dd5720a3117b18ac9879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.198915   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key ...
	I0814 16:10:35.198931   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key: {Name:mk574395626194e124be99961a17bf1bc61653b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.199059   21883 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:10:35.305488   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt ...
	I0814 16:10:35.305518   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt: {Name:mk3bb83a0fb2ed49a81ef6a63fce51ca58051613 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.305702   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key ...
	I0814 16:10:35.305717   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key: {Name:mk32534c9350755c75499694cb013600e4c1ce82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.305811   21883 certs.go:256] generating profile certs ...
	I0814 16:10:35.305876   21883 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.key
	I0814 16:10:35.305895   21883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt with IP's: []
	I0814 16:10:35.442627   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt ...
	I0814 16:10:35.442657   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: {Name:mkbab4ed7e6d5971126674d442590fd6728b9eec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.442837   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.key ...
	I0814 16:10:35.442851   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.key: {Name:mk1782a3916e9e3308a4f8c0920aef28bba5d828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.442977   21883 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key.65557067
	I0814 16:10:35.442999   21883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt.65557067 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170]
	I0814 16:10:35.633944   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt.65557067 ...
	I0814 16:10:35.633973   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt.65557067: {Name:mk9c9d65275d11733d48a9bb792c3edff9dbb01c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.634140   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key.65557067 ...
	I0814 16:10:35.634157   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key.65557067: {Name:mka72a6e2cc95c34d1b74708936e1ed30a52196a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.634250   21883 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt.65557067 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt
	I0814 16:10:35.634341   21883 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key.65557067 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key
	I0814 16:10:35.634421   21883 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.key
	I0814 16:10:35.634446   21883 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.crt with IP's: []
	I0814 16:10:35.804439   21883 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.crt ...
	I0814 16:10:35.804465   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.crt: {Name:mk9edbd5c2ee2861498ab8a21bdc910e43daaa9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.804622   21883 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.key ...
	I0814 16:10:35.804633   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.key: {Name:mk495301dade9c4e996c4c2a8a360d9a8e9b4707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:35.804786   21883 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:10:35.804817   21883 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:10:35.804840   21883 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:10:35.804865   21883 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:10:35.805386   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:10:35.828315   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:10:35.849948   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:10:35.870935   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:10:35.891433   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0814 16:10:35.912401   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:10:35.933755   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:10:35.955710   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:10:35.977404   21883 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:10:35.998289   21883 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 16:10:36.013514   21883 ssh_runner.go:195] Run: openssl version
	I0814 16:10:36.018855   21883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:10:36.028583   21883 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:36.032680   21883 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:36.032726   21883 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:10:36.038176   21883 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
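The b5213941.0 link name used above is OpenSSL's subject hash of the minikube CA; the value can be reproduced and the resulting link inspected with commands like these (illustrative, based on the paths that appear in this log):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem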
	I0814 16:10:36.047985   21883 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:10:36.051527   21883 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:10:36.051585   21883 kubeadm.go:392] StartCluster: {Name:addons-521895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-521895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:10:36.051676   21883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 16:10:36.051717   21883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 16:10:36.091799   21883 cri.go:89] found id: ""
	I0814 16:10:36.091878   21883 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 16:10:36.101280   21883 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 16:10:36.110142   21883 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 16:10:36.118798   21883 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 16:10:36.118812   21883 kubeadm.go:157] found existing configuration files:
	
	I0814 16:10:36.118853   21883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 16:10:36.127079   21883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 16:10:36.127124   21883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 16:10:36.135493   21883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 16:10:36.143457   21883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 16:10:36.143497   21883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 16:10:36.151866   21883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 16:10:36.160052   21883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 16:10:36.160088   21883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 16:10:36.168543   21883 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 16:10:36.176858   21883 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 16:10:36.176916   21883 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 16:10:36.185846   21883 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 16:10:36.240444   21883 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 16:10:36.240571   21883 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 16:10:36.339722   21883 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 16:10:36.339836   21883 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 16:10:36.339923   21883 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 16:10:36.350867   21883 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 16:10:36.443055   21883 out.go:204]   - Generating certificates and keys ...
	I0814 16:10:36.443181   21883 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 16:10:36.443276   21883 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 16:10:36.547559   21883 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 16:10:36.657343   21883 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 16:10:36.740110   21883 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 16:10:37.022509   21883 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 16:10:37.246483   21883 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 16:10:37.246671   21883 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-521895 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0814 16:10:37.323623   21883 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 16:10:37.323768   21883 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-521895 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0814 16:10:37.568386   21883 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 16:10:37.679115   21883 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 16:10:37.783745   21883 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 16:10:37.783817   21883 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 16:10:37.867783   21883 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 16:10:38.225454   21883 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 16:10:38.362436   21883 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 16:10:38.537998   21883 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 16:10:38.658136   21883 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 16:10:38.658641   21883 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 16:10:38.661071   21883 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 16:10:38.663034   21883 out.go:204]   - Booting up control plane ...
	I0814 16:10:38.663144   21883 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 16:10:38.663227   21883 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 16:10:38.663301   21883 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 16:10:38.681360   21883 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 16:10:38.688368   21883 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 16:10:38.688439   21883 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 16:10:38.815510   21883 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 16:10:38.815656   21883 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 16:10:39.816545   21883 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001739927s
	I0814 16:10:39.816650   21883 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 16:10:44.815025   21883 kubeadm.go:310] [api-check] The API server is healthy after 5.001358779s
	I0814 16:10:44.827551   21883 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 16:10:44.845146   21883 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 16:10:44.874557   21883 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 16:10:44.874750   21883 kubeadm.go:310] [mark-control-plane] Marking the node addons-521895 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 16:10:44.889264   21883 kubeadm.go:310] [bootstrap-token] Using token: vwipfe.56fv3zfcv1u9rrs2
	I0814 16:10:44.890619   21883 out.go:204]   - Configuring RBAC rules ...
	I0814 16:10:44.890770   21883 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 16:10:44.897257   21883 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 16:10:44.912935   21883 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 16:10:44.917697   21883 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 16:10:44.924877   21883 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 16:10:44.929196   21883 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 16:10:45.223610   21883 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 16:10:45.713421   21883 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 16:10:46.221131   21883 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 16:10:46.221918   21883 kubeadm.go:310] 
	I0814 16:10:46.222019   21883 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 16:10:46.222055   21883 kubeadm.go:310] 
	I0814 16:10:46.222147   21883 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 16:10:46.222162   21883 kubeadm.go:310] 
	I0814 16:10:46.222198   21883 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 16:10:46.222278   21883 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 16:10:46.222367   21883 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 16:10:46.222377   21883 kubeadm.go:310] 
	I0814 16:10:46.222440   21883 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 16:10:46.222450   21883 kubeadm.go:310] 
	I0814 16:10:46.222516   21883 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 16:10:46.222526   21883 kubeadm.go:310] 
	I0814 16:10:46.222604   21883 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 16:10:46.222739   21883 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 16:10:46.222834   21883 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 16:10:46.222842   21883 kubeadm.go:310] 
	I0814 16:10:46.222946   21883 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 16:10:46.223042   21883 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 16:10:46.223068   21883 kubeadm.go:310] 
	I0814 16:10:46.223176   21883 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vwipfe.56fv3zfcv1u9rrs2 \
	I0814 16:10:46.223303   21883 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 16:10:46.223354   21883 kubeadm.go:310] 	--control-plane 
	I0814 16:10:46.223364   21883 kubeadm.go:310] 
	I0814 16:10:46.223466   21883 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 16:10:46.223475   21883 kubeadm.go:310] 
	I0814 16:10:46.223590   21883 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vwipfe.56fv3zfcv1u9rrs2 \
	I0814 16:10:46.223744   21883 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
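The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's DER-encoded public key. Following the upstream kubeadm documentation, it can be recomputed on the node roughly as follows (a sketch assuming an RSA CA; this cluster keeps its CA at /var/lib/minikube/certs/ca.crt rather than /etc/kubernetes/pki/ca.crt):
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'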
	I0814 16:10:46.224384   21883 kubeadm.go:310] W0814 16:10:36.221186     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:10:46.224764   21883 kubeadm.go:310] W0814 16:10:36.222383     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:10:46.224862   21883 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 16:10:46.224885   21883 cni.go:84] Creating CNI manager for ""
	I0814 16:10:46.224894   21883 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 16:10:46.226761   21883 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 16:10:46.228163   21883 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 16:10:46.240120   21883 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
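The 496-byte /etc/cni/net.d/1-k8s.conflist written above is minikube's bridge CNI configuration. A representative bridge + portmap conflist for the 10.244.0.0/16 pod CIDR used here looks roughly like the following (an illustrative sketch, not necessarily byte-for-byte what minikube generates):
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}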
	I0814 16:10:46.257403   21883 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 16:10:46.257490   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:46.257490   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-521895 minikube.k8s.io/updated_at=2024_08_14T16_10_46_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=addons-521895 minikube.k8s.io/primary=true
	I0814 16:10:46.393678   21883 ops.go:34] apiserver oom_adj: -16
	I0814 16:10:46.393717   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:46.893842   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:47.394570   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:47.894347   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:48.394498   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:48.894638   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:49.393849   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:49.894009   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:50.394030   21883 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:10:50.483045   21883 kubeadm.go:1113] duration metric: took 4.225619558s to wait for elevateKubeSystemPrivileges
	I0814 16:10:50.483081   21883 kubeadm.go:394] duration metric: took 14.431497273s to StartCluster
	I0814 16:10:50.483103   21883 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:50.483264   21883 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:10:50.483795   21883 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:10:50.484004   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 16:10:50.484043   21883 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:10:50.484092   21883 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0814 16:10:50.484188   21883 addons.go:69] Setting yakd=true in profile "addons-521895"
	I0814 16:10:50.484201   21883 addons.go:69] Setting helm-tiller=true in profile "addons-521895"
	I0814 16:10:50.484211   21883 addons.go:69] Setting gcp-auth=true in profile "addons-521895"
	I0814 16:10:50.484194   21883 addons.go:69] Setting inspektor-gadget=true in profile "addons-521895"
	I0814 16:10:50.484231   21883 addons.go:69] Setting ingress=true in profile "addons-521895"
	I0814 16:10:50.484240   21883 addons.go:234] Setting addon inspektor-gadget=true in "addons-521895"
	I0814 16:10:50.484244   21883 mustload.go:65] Loading cluster: addons-521895
	I0814 16:10:50.484247   21883 addons.go:234] Setting addon ingress=true in "addons-521895"
	I0814 16:10:50.484247   21883 addons.go:69] Setting volcano=true in profile "addons-521895"
	I0814 16:10:50.484248   21883 addons.go:69] Setting storage-provisioner=true in profile "addons-521895"
	I0814 16:10:50.484260   21883 config.go:182] Loaded profile config "addons-521895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:50.484270   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484274   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484288   21883 addons.go:234] Setting addon volcano=true in "addons-521895"
	I0814 16:10:50.484305   21883 addons.go:234] Setting addon storage-provisioner=true in "addons-521895"
	I0814 16:10:50.484306   21883 addons.go:69] Setting ingress-dns=true in profile "addons-521895"
	I0814 16:10:50.484321   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484337   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484350   21883 addons.go:234] Setting addon ingress-dns=true in "addons-521895"
	I0814 16:10:50.484384   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484223   21883 addons.go:234] Setting addon helm-tiller=true in "addons-521895"
	I0814 16:10:50.484425   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484448   21883 config.go:182] Loaded profile config "addons-521895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:10:50.484733   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484752   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484754   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484755   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484762   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484772   21883 addons.go:69] Setting metrics-server=true in profile "addons-521895"
	I0814 16:10:50.484781   21883 addons.go:69] Setting cloud-spanner=true in profile "addons-521895"
	I0814 16:10:50.484786   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484794   21883 addons.go:234] Setting addon metrics-server=true in "addons-521895"
	I0814 16:10:50.484800   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484804   21883 addons.go:234] Setting addon cloud-spanner=true in "addons-521895"
	I0814 16:10:50.484813   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.484814   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484821   21883 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-521895"
	I0814 16:10:50.484837   21883 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-521895"
	I0814 16:10:50.484851   21883 addons.go:69] Setting volumesnapshots=true in profile "addons-521895"
	I0814 16:10:50.484740   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484857   21883 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-521895"
	I0814 16:10:50.484857   21883 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-521895"
	I0814 16:10:50.484867   21883 addons.go:69] Setting default-storageclass=true in profile "addons-521895"
	I0814 16:10:50.484871   21883 addons.go:234] Setting addon volumesnapshots=true in "addons-521895"
	I0814 16:10:50.484223   21883 addons.go:234] Setting addon yakd=true in "addons-521895"
	I0814 16:10:50.484872   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.484883   21883 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-521895"
	I0814 16:10:50.484883   21883 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-521895"
	I0814 16:10:50.484893   21883 addons.go:69] Setting registry=true in profile "addons-521895"
	I0814 16:10:50.484903   21883 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-521895"
	I0814 16:10:50.484914   21883 addons.go:234] Setting addon registry=true in "addons-521895"
	I0814 16:10:50.484775   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.484998   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485059   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485145   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485188   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485260   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485468   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485488   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485571   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485632   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485641   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485633   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485695   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485707   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.485723   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485856   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485868   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.485983   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485996   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.485999   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.486021   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.486054   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.486083   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.486169   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.486532   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.486549   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.486788   21883 out.go:177] * Verifying Kubernetes components...
	I0814 16:10:50.492597   21883 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:10:50.507457   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42397
	I0814 16:10:50.507473   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0814 16:10:50.507689   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I0814 16:10:50.507955   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.509148   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.509178   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.509727   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.509790   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.510105   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.510310   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0814 16:10:50.510885   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.510920   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.511008   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.511464   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.511487   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.511541   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.511824   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.511828   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.512386   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.512435   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.512829   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.512848   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.513186   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.513218   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.514855   21883 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-521895"
	I0814 16:10:50.514898   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.515257   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.515305   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.515499   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.519922   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0814 16:10:50.520432   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.520447   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.520474   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.520493   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.528014   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.528206   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0814 16:10:50.528327   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34683
	I0814 16:10:50.528432   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44889
	I0814 16:10:50.543910   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.550974   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.551016   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46099
	I0814 16:10:50.551108   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.551130   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.551148   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.551846   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.551865   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.551947   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.552633   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.552671   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.553027   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.553086   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.553102   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.553142   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43657
	I0814 16:10:50.553331   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.553822   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.553861   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.554438   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35569
	I0814 16:10:50.554517   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.554582   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.554593   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.554568   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.555019   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.555111   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.555141   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.555403   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.555423   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.555795   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.555839   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.555920   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.555942   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.556059   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.556411   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.556435   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.556546   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.557982   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.558309   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.558384   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.558938   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.558978   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.559615   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.560037   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.560726   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.560766   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.560981   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.561365   21883 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0814 16:10:50.561413   21883 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 16:10:50.562833   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0814 16:10:50.562861   21883 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0814 16:10:50.562871   21883 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0814 16:10:50.563255   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.572515   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.572669   21883 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:10:50.572680   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 16:10:50.572698   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.572779   21883 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 16:10:50.572787   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0814 16:10:50.572798   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.572551   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.572843   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.572917   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45805
	I0814 16:10:50.573026   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
	I0814 16:10:50.573152   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.573422   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.574217   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.574313   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.574859   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.574155   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.574974   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.575499   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.575926   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.576795   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.576835   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.576977   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.577357   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.577395   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.577411   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.577443   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.577654   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.577879   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.577945   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.577961   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.578150   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.578177   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.578378   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.578668   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.578846   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.579412   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.579436   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.579586   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0814 16:10:50.579821   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.580035   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.580384   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.580431   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.580436   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.580450   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.580799   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0814 16:10:50.580907   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.581393   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.581426   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.581628   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0814 16:10:50.581639   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.582089   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.582103   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.584259   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0814 16:10:50.584290   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39769
	I0814 16:10:50.584382   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.584413   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.584939   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.584978   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.585242   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.585254   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.585318   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.585383   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.585582   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.585860   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.585877   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.586000   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.586010   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.586408   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.586436   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.586616   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.587410   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.587634   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.587664   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.587971   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.588007   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.593815   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0814 16:10:50.594354   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.594977   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.594994   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.595480   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.595720   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.597171   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45315
	I0814 16:10:50.597574   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.597676   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.597739   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41671
	I0814 16:10:50.598374   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.598399   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.598527   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.598742   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.599179   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.599196   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.599221   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.599613   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.599770   21883 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0814 16:10:50.600228   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.600268   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.601179   21883 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0814 16:10:50.601196   21883 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0814 16:10:50.601213   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.601440   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0814 16:10:50.601985   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.602767   21883 addons.go:234] Setting addon default-storageclass=true in "addons-521895"
	I0814 16:10:50.602808   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:50.603178   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.603195   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.603227   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.603270   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.603530   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.603698   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.604137   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.604745   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.604777   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.604936   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.605091   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.605219   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.605336   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.612706   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.614440   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0814 16:10:50.614902   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.615495   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.615520   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.615729   21883 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0814 16:10:50.616001   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.616218   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.617077   21883 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0814 16:10:50.617101   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0814 16:10:50.617120   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.620338   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.620758   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
	I0814 16:10:50.621049   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.621193   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.621932   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.621972   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.621992   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.622108   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.622339   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.622362   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.622386   21883 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0814 16:10:50.622590   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.622739   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.623561   21883 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 16:10:50.623578   21883 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 16:10:50.623594   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.623602   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.624157   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.626613   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.627272   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.627411   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0814 16:10:50.627521   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.627537   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.627617   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.628213   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.628316   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.628362   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I0814 16:10:50.628915   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.628935   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.629289   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.629466   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.629636   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.629804   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.629979   21883 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0814 16:10:50.630278   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I0814 16:10:50.630409   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.630924   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.630944   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.631001   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.631001   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.631253   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I0814 16:10:50.631750   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.631769   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.632122   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.632208   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.632360   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.632420   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.632547   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.632971   21883 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:50.633097   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.633113   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.632972   21883 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0814 16:10:50.633686   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.634327   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.635294   21883 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:10:50.635402   21883 out.go:177]   - Using image docker.io/busybox:stable
	I0814 16:10:50.635606   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:50.635645   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:50.635883   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0814 16:10:50.636293   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.636546   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0814 16:10:50.636727   21883 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 16:10:50.636747   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0814 16:10:50.636763   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.637006   21883 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 16:10:50.637020   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0814 16:10:50.637037   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.637020   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46343
	I0814 16:10:50.637895   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.638390   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.638405   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.638500   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.638519   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.638970   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.639011   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34627
	I0814 16:10:50.639251   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.639281   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.639580   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.639653   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.639960   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0814 16:10:50.640200   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.640222   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.640552   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.640732   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.641355   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.641889   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0814 16:10:50.642132   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.642157   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.642421   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.642604   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.642838   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.643062   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.643558   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.644041   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0814 16:10:50.644211   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.644467   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.644657   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.644676   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.644919   21883 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0814 16:10:50.644962   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.645017   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.645134   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.645169   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I0814 16:10:50.645819   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0814 16:10:50.645907   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:50.645916   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:50.646075   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:50.646080   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.646087   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:50.646105   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:50.646112   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:50.646195   21883 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0814 16:10:50.646209   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0814 16:10:50.646226   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.646295   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:50.646318   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:50.646326   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	W0814 16:10:50.646394   21883 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0814 16:10:50.646505   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.646832   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0814 16:10:50.647378   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.647926   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.647938   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.648088   21883 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0814 16:10:50.648508   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.649000   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.649086   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.649108   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.649129   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0814 16:10:50.649279   21883 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 16:10:50.649289   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0814 16:10:50.649301   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.649477   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.650143   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.650945   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.651285   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.651351   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0814 16:10:50.651698   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.651782   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.651997   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.652174   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.652327   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.652462   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.652543   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.653163   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.653566   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.653826   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0814 16:10:50.653831   21883 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0814 16:10:50.653938   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.654098   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.654169   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.654322   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.654428   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.654506   21883 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0814 16:10:50.654570   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.655198   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0814 16:10:50.655205   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0814 16:10:50.655214   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0814 16:10:50.655215   21883 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0814 16:10:50.655229   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.655229   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.657089   21883 out.go:177]   - Using image docker.io/registry:2.8.3
	I0814 16:10:50.658199   21883 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0814 16:10:50.658218   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0814 16:10:50.658231   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.658234   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:50.659298   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.659354   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.659494   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.659681   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.659868   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.660017   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.660439   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.661801   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.661801   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.661832   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.661860   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.662016   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.662312   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.662328   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.662353   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.662512   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.662524   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.662857   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.662988   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.663104   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:50.665645   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40161
	I0814 16:10:50.665973   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:50.666359   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:50.666370   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:50.666722   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:50.666885   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:50.668366   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:50.668555   21883 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 16:10:50.668567   21883 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 16:10:50.668578   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	W0814 16:10:50.669633   21883 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34528->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.669653   21883 retry.go:31] will retry after 164.2543ms: ssh: handshake failed: read tcp 192.168.39.1:34528->192.168.39.170:22: read: connection reset by peer
	W0814 16:10:50.669716   21883 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34538->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.669726   21883 retry.go:31] will retry after 252.601659ms: ssh: handshake failed: read tcp 192.168.39.1:34538->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.670963   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.671292   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:50.671309   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:50.671519   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:50.671674   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:50.671883   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:50.672013   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	W0814 16:10:50.672492   21883 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34542->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.672507   21883 retry.go:31] will retry after 232.561584ms: ssh: handshake failed: read tcp 192.168.39.1:34542->192.168.39.170:22: read: connection reset by peer
	W0814 16:10:50.834593   21883 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:34546->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.834622   21883 retry.go:31] will retry after 229.630872ms: ssh: handshake failed: read tcp 192.168.39.1:34546->192.168.39.170:22: read: connection reset by peer
	I0814 16:10:50.949629   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:10:51.027681   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0814 16:10:51.062834   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0814 16:10:51.071717   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0814 16:10:51.074511   21883 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 16:10:51.074534   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0814 16:10:51.076116   21883 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0814 16:10:51.076139   21883 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0814 16:10:51.081546   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0814 16:10:51.083581   21883 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0814 16:10:51.083599   21883 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0814 16:10:51.114075   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0814 16:10:51.131524   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0814 16:10:51.131554   21883 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0814 16:10:51.135778   21883 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0814 16:10:51.135797   21883 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0814 16:10:51.151994   21883 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:10:51.152070   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 16:10:51.270655   21883 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0814 16:10:51.270684   21883 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0814 16:10:51.276223   21883 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 16:10:51.276243   21883 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 16:10:51.310888   21883 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0814 16:10:51.310916   21883 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0814 16:10:51.311269   21883 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0814 16:10:51.311289   21883 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0814 16:10:51.343122   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 16:10:51.347256   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0814 16:10:51.347285   21883 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0814 16:10:51.486680   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0814 16:10:51.486713   21883 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0814 16:10:51.535277   21883 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0814 16:10:51.535310   21883 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0814 16:10:51.565416   21883 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0814 16:10:51.565450   21883 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0814 16:10:51.567437   21883 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0814 16:10:51.567458   21883 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0814 16:10:51.575297   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0814 16:10:51.594357   21883 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 16:10:51.594390   21883 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 16:10:51.661706   21883 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0814 16:10:51.661730   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0814 16:10:51.710957   21883 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0814 16:10:51.710985   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0814 16:10:51.721696   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0814 16:10:51.721725   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0814 16:10:51.763273   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0814 16:10:51.763302   21883 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0814 16:10:51.795776   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 16:10:51.823344   21883 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0814 16:10:51.823373   21883 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0814 16:10:51.852470   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0814 16:10:51.869448   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0814 16:10:51.891244   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0814 16:10:51.891278   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0814 16:10:51.953475   21883 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:51.953506   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0814 16:10:52.007299   21883 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0814 16:10:52.007355   21883 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0814 16:10:52.230751   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0814 16:10:52.230778   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0814 16:10:52.268601   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:52.349164   21883 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0814 16:10:52.349194   21883 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0814 16:10:52.514968   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0814 16:10:52.514998   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0814 16:10:52.652787   21883 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 16:10:52.652817   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0814 16:10:52.764861   21883 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0814 16:10:52.764891   21883 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0814 16:10:52.910399   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0814 16:10:53.122923   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0814 16:10:53.122945   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0814 16:10:53.351159   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0814 16:10:53.351182   21883 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0814 16:10:53.648965   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0814 16:10:53.648992   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0814 16:10:53.914056   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0814 16:10:53.914075   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0814 16:10:54.227644   21883 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 16:10:54.227675   21883 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0814 16:10:54.677927   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0814 16:10:55.027482   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.077816028s)
	I0814 16:10:55.027522   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.999815031s)
	I0814 16:10:55.027539   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:55.027541   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:55.027552   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:55.027553   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:55.027952   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:55.027970   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:55.027980   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:55.027988   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:55.028060   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:55.028081   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:55.028091   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:55.028100   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:55.028107   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:55.028191   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:55.028206   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:55.028224   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:55.028319   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:55.028346   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:55.028385   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:57.649200   21883 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0814 16:10:57.649236   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:57.652736   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:57.653364   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:57.653399   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:57.653614   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:57.653842   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:57.654042   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:57.654204   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:58.256857   21883 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0814 16:10:58.317832   21883 addons.go:234] Setting addon gcp-auth=true in "addons-521895"
	I0814 16:10:58.317884   21883 host.go:66] Checking if "addons-521895" exists ...
	I0814 16:10:58.318205   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:58.318232   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:58.333959   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46191
	I0814 16:10:58.334434   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:58.334926   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:58.334952   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:58.335376   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:58.335935   21883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:10:58.335968   21883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:10:58.351420   21883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0814 16:10:58.351818   21883 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:10:58.352273   21883 main.go:141] libmachine: Using API Version  1
	I0814 16:10:58.352303   21883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:10:58.352583   21883 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:10:58.352752   21883 main.go:141] libmachine: (addons-521895) Calling .GetState
	I0814 16:10:58.354278   21883 main.go:141] libmachine: (addons-521895) Calling .DriverName
	I0814 16:10:58.354519   21883 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0814 16:10:58.354545   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHHostname
	I0814 16:10:58.357637   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:58.358080   21883 main.go:141] libmachine: (addons-521895) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:83:8f", ip: ""} in network mk-addons-521895: {Iface:virbr1 ExpiryTime:2024-08-14 17:10:20 +0000 UTC Type:0 Mac:52:54:00:8a:83:8f Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:addons-521895 Clientid:01:52:54:00:8a:83:8f}
	I0814 16:10:58.358106   21883 main.go:141] libmachine: (addons-521895) DBG | domain addons-521895 has defined IP address 192.168.39.170 and MAC address 52:54:00:8a:83:8f in network mk-addons-521895
	I0814 16:10:58.358228   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHPort
	I0814 16:10:58.358409   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHKeyPath
	I0814 16:10:58.358548   21883 main.go:141] libmachine: (addons-521895) Calling .GetSSHUsername
	I0814 16:10:58.358709   21883 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/addons-521895/id_rsa Username:docker}
	I0814 16:10:59.213723   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.150858257s)
	I0814 16:10:59.213775   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.213786   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.213788   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.142041941s)
	I0814 16:10:59.213827   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.213841   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.213860   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.132290307s)
	I0814 16:10:59.213896   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.213912   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.213931   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.099827462s)
	I0814 16:10:59.213973   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.214056   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.214179   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.418374051s)
	I0814 16:10:59.214205   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.214215   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.214351   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.361819374s)
	I0814 16:10:59.214367   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.214376   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.214435   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.344954677s)
	I0814 16:10:59.213982   21883 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.061965062s)
	I0814 16:10:59.214449   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.214457   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.213994   21883 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.061901768s)
	I0814 16:10:59.214824   21883 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
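	(Note: the sed pipeline that just completed rewrites the coredns ConfigMap in place. Reconstructed from the command itself rather than copied from the cluster, the stanza it injects ahead of the forward plugin in the Corefile, plus the log directive it adds before errors, is:
		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}
		log
	This is what makes host.minikube.internal resolvable from inside the cluster.)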
	I0814 16:10:59.215399   21883 node_ready.go:35] waiting up to 6m0s for node "addons-521895" to be "Ready" ...
	I0814 16:10:59.214070   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.63875231s)
	I0814 16:10:59.215649   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.215660   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215711   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.215713   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.215728   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.215737   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.215743   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.215745   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215751   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.215761   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.215768   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.214035   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.870866076s)
	I0814 16:10:59.215991   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216008   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.216028   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.305596942s)
	I0814 16:10:59.215793   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.215807   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216051   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216060   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215814   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.215832   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216111   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216120   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216128   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215834   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216157   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216167   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216174   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215854   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216281   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216290   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216297   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.215962   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.947318144s)
	W0814 16:10:59.216456   21883 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0814 16:10:59.216478   21883 retry.go:31] will retry after 156.382504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
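	(Note: the failure above is the usual race between registering the snapshot CRDs and applying a custom resource that depends on them; the VolumeSnapshotClass is rejected because the volumesnapshotclasses CRD is not yet established, and minikube retries shortly afterwards with apply --force. A minimal manual equivalent, assuming direct kubectl access to this cluster, would wait for the CRD before applying the class:
		kubectl wait --for=condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	The retry visible in the log achieves the same outcome.)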
	I0814 16:10:59.216600   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216604   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216606   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216620   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216634   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216637   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216642   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216645   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216651   21883 addons.go:475] Verifying addon metrics-server=true in "addons-521895"
	I0814 16:10:59.216654   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216652   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216664   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.216678   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216686   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216693   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.216700   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.216708   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216714   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216638   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216722   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216716   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216694   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216729   21883 addons.go:475] Verifying addon ingress=true in "addons-521895"
	I0814 16:10:59.216749   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216897   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.216925   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.216932   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.216940   21883 addons.go:475] Verifying addon registry=true in "addons-521895"
	I0814 16:10:59.216982   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.217013   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.217441   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.217455   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.217811   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.217835   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218021   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.218034   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.218044   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.216740   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.218087   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.217851   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.217941   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218153   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.218162   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.218172   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.218435   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218449   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.218579   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.218625   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.218642   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218659   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.218665   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.218670   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.219656   21883 out.go:177] * Verifying registry addon...
	I0814 16:10:59.219716   21883 out.go:177] * Verifying ingress addon...
	I0814 16:10:59.220405   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.220469   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.221006   21883 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-521895 service yakd-dashboard -n yakd-dashboard
	
	I0814 16:10:59.221772   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.221783   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.222651   21883 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0814 16:10:59.222651   21883 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0814 16:10:59.233481   21883 node_ready.go:49] node "addons-521895" has status "Ready":"True"
	I0814 16:10:59.233508   21883 node_ready.go:38] duration metric: took 18.091206ms for node "addons-521895" to be "Ready" ...
	I0814 16:10:59.233521   21883 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:10:59.270255   21883 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0814 16:10:59.270286   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:59.272344   21883 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-7rf58" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.280084   21883 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0814 16:10:59.280123   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:10:59.323126   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.323171   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.323513   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.323532   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:10:59.323562   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.334079   21883 pod_ready.go:92] pod "coredns-6f6b679f8f-7rf58" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.334099   21883 pod_ready.go:81] duration metric: took 61.72445ms for pod "coredns-6f6b679f8f-7rf58" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.334112   21883 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kdsjh" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.343052   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:10:59.343076   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:10:59.343356   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:10:59.343402   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:10:59.343414   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	W0814 16:10:59.343498   21883 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
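	(Note: the default-storageclass warning is an optimistic-concurrency conflict: the addon manager updated the "standard" StorageClass from a stale resourceVersion while another writer touched the object, so the API server rejected the write. The operation is idempotent, and a patch avoids the stale-object problem entirely; a hedged manual equivalent for the class named in the error would be:
		kubectl patch storageclass standard -p \
		  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	)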
	I0814 16:10:59.365587   21883 pod_ready.go:92] pod "coredns-6f6b679f8f-kdsjh" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.365621   21883 pod_ready.go:81] duration metric: took 31.500147ms for pod "coredns-6f6b679f8f-kdsjh" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.365637   21883 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.373460   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0814 16:10:59.411487   21883 pod_ready.go:92] pod "etcd-addons-521895" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.411516   21883 pod_ready.go:81] duration metric: took 45.869605ms for pod "etcd-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.411531   21883 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.438374   21883 pod_ready.go:92] pod "kube-apiserver-addons-521895" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.438410   21883 pod_ready.go:81] duration metric: took 26.870151ms for pod "kube-apiserver-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.438424   21883 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.618670   21883 pod_ready.go:92] pod "kube-controller-manager-addons-521895" in "kube-system" namespace has status "Ready":"True"
	I0814 16:10:59.618699   21883 pod_ready.go:81] duration metric: took 180.265352ms for pod "kube-controller-manager-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.618715   21883 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-djhvc" in "kube-system" namespace to be "Ready" ...
	I0814 16:10:59.718829   21883 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-521895" context rescaled to 1 replicas
	I0814 16:10:59.727475   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:10:59.728010   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.204164   21883 pod_ready.go:92] pod "kube-proxy-djhvc" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:00.204187   21883 pod_ready.go:81] duration metric: took 585.463961ms for pod "kube-proxy-djhvc" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:00.204197   21883 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:00.228296   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:00.229033   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.421525   21883 pod_ready.go:92] pod "kube-scheduler-addons-521895" in "kube-system" namespace has status "Ready":"True"
	I0814 16:11:00.421549   21883 pod_ready.go:81] duration metric: took 217.343558ms for pod "kube-scheduler-addons-521895" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:00.421561   21883 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace to be "Ready" ...
	I0814 16:11:00.737427   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:00.740283   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:00.998082   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.320077691s)
	I0814 16:11:00.998122   21883 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.643581233s)
	I0814 16:11:00.998148   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:00.998164   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:00.998428   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:00.998444   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:00.998455   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:00.998463   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:00.998687   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:00.998706   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:00.998717   21883 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-521895"
	I0814 16:11:01.000172   21883 out.go:177] * Verifying csi-hostpath-driver addon...
	I0814 16:11:01.000178   21883 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0814 16:11:01.002180   21883 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0814 16:11:01.002813   21883 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0814 16:11:01.003370   21883 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0814 16:11:01.003391   21883 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0814 16:11:01.022962   21883 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0814 16:11:01.022982   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:01.066734   21883 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0814 16:11:01.066761   21883 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0814 16:11:01.173697   21883 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 16:11:01.173719   21883 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0814 16:11:01.230008   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:01.230091   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:01.294840   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.921327036s)
	I0814 16:11:01.294893   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:01.294907   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:01.295194   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:01.295215   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:01.295226   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:01.295234   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:01.296532   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:11:01.296544   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:01.296564   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:01.299035   21883 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0814 16:11:01.508024   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:01.732594   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:01.734026   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.007288   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:02.226929   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:02.227936   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.427845   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:02.513492   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:02.750663   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:02.750935   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:02.798261   21883 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.499186787s)
	I0814 16:11:02.798344   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:02.798362   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:02.798695   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:02.798714   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:02.798734   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:11:02.798800   21883 main.go:141] libmachine: Making call to close driver server
	I0814 16:11:02.798818   21883 main.go:141] libmachine: (addons-521895) Calling .Close
	I0814 16:11:02.799075   21883 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:11:02.799121   21883 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:11:02.799105   21883 main.go:141] libmachine: (addons-521895) DBG | Closing plugin on server side
	I0814 16:11:02.801052   21883 addons.go:475] Verifying addon gcp-auth=true in "addons-521895"
	I0814 16:11:02.802582   21883 out.go:177] * Verifying gcp-auth addon...
	I0814 16:11:02.804847   21883 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0814 16:11:02.834593   21883 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0814 16:11:02.834621   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
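	(Note: the repeated kapi.go:96 lines below are minikube polling the labelled pods until they report Ready. A rough kubectl equivalent, assuming the same label selectors and namespaces shown in the log, would be:
		kubectl -n gcp-auth wait --for=condition=Ready pod \
		  -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m
		kubectl -n ingress-nginx wait --for=condition=Ready pod \
		  -l app.kubernetes.io/name=ingress-nginx --timeout=5m
	)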
	I0814 16:11:03.009454   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:03.227644   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:03.228212   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:03.308438   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:03.507500   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:03.727614   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:03.728081   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:03.808381   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:04.007784   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:04.226752   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:04.227128   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:04.308760   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:04.428109   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:04.508069   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:04.726882   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:04.727496   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:04.808712   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:05.007695   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:05.227068   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:05.227235   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:05.319526   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:05.507013   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:05.727193   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:05.728531   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:05.808468   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.007202   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:06.228045   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:06.229731   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:06.310151   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.509983   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:06.730274   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:06.731864   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:06.809160   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:06.929137   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:07.007623   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:07.226586   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:07.227067   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:07.308599   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:07.507221   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:07.727762   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:07.728072   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:07.808804   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:08.007468   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:08.226459   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:08.226744   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:08.308406   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:08.507197   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:08.726628   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:08.727654   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:08.808701   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:09.007296   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:09.226381   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:09.226555   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:09.308684   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:09.428711   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:09.508071   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:09.727115   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:09.727466   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:09.808296   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:10.008280   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:10.228088   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:10.228620   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:10.308211   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:10.507596   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:10.728403   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:10.728601   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:10.808848   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.009852   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:11.226895   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:11.228783   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:11.308710   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.508200   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:11.726676   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:11.727270   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:11.808294   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:11.927438   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:12.007223   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:12.228152   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:12.228355   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:12.308991   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:12.506855   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:12.727718   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:12.727962   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:12.808073   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.008079   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:13.226418   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:13.226929   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:13.308257   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.631274   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:13.727828   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:13.728230   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:13.808572   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:13.927651   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:14.007952   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:14.226942   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:14.227724   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:14.308658   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:14.507920   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:14.726550   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:14.726703   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:14.808016   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:15.110669   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:15.227157   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:15.227366   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:15.309275   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:15.507571   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.113840   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:16.115033   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.115723   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.115815   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.125813   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:16.227404   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.227711   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.308389   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:16.507014   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:16.727511   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:16.727606   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:16.808034   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:17.007828   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:17.231581   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:17.231652   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:17.308472   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:17.507173   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:17.726983   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:17.728532   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:17.808925   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:18.006837   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:18.228895   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:18.229082   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:18.328890   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:18.426854   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:18.507771   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:18.727043   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:18.727486   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:18.807980   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:19.008476   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:19.233287   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:19.233786   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:19.308400   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:19.508874   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:19.727155   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:19.728464   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:19.813584   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:20.006667   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:20.228789   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:20.229342   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:20.312116   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:20.428440   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:20.508181   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:20.727373   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:20.727558   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:20.808847   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:21.007522   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:21.227842   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:21.228018   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:21.308017   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:21.508392   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:21.726964   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:21.728528   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:21.808940   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.007424   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:22.294744   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:22.295052   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:22.309891   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.506838   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:22.728194   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:22.728419   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:22.811652   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:22.928282   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:23.007367   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:23.227777   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:23.228301   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:23.308649   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:23.508564   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:23.728103   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:23.729155   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:23.808624   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.007706   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:24.226403   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:24.227093   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:24.308218   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.507457   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:24.727547   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:24.727845   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:24.808409   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:24.928360   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:25.013520   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:25.227947   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:25.227978   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:25.308814   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:25.507178   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:25.727030   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:25.728870   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:25.808339   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:26.007711   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:26.227101   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:26.227625   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:26.309581   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:26.507736   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:26.734799   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:26.735827   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:26.809021   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.006897   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:27.235695   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:27.236541   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:27.308369   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.428997   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:27.506855   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:27.998154   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:27.998283   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:27.998793   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.008923   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:28.227676   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:28.228085   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.308586   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:28.507935   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:28.727235   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:28.727573   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:28.807802   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.007809   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:29.227035   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:29.227262   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:29.308591   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.507286   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:29.727973   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:29.728460   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:29.809110   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:29.927863   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:30.007825   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:30.226841   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:30.227466   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:30.309723   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:30.507783   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:30.727535   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:30.727876   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:30.808532   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.007393   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:31.226590   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:31.226916   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:31.308371   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.506995   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:31.727621   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:31.727942   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:31.808541   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:31.927990   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:32.006924   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:32.226311   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:32.227284   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:32.308602   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:32.507833   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:32.727489   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:32.727775   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:32.808528   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:33.009003   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:33.227490   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:33.227722   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:33.308208   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:33.507805   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:33.727298   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:33.727534   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:33.807946   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:34.008130   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:34.227582   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:34.227616   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:34.308827   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:34.427547   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:34.507852   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:34.728353   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:34.728545   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:34.808910   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:35.009293   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:35.227629   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:35.228104   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:35.308423   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:35.508122   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:35.725987   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:35.726687   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:35.807739   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.008016   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:36.226755   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:36.227768   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:36.308157   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.508301   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:36.727117   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:36.727704   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:36.808601   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:36.928194   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:37.015564   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:37.502838   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:37.503012   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:37.503573   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:37.506905   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:37.727062   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:37.727161   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:37.808664   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:38.006884   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:38.228649   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:38.228698   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:38.308726   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:38.512067   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:38.727481   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:38.727658   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:38.808265   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:39.007855   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:39.226864   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:39.227791   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:39.308635   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:39.428084   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:39.506843   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:39.726713   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:39.727285   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:39.808214   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:40.007710   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:40.226737   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:40.228086   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:40.308810   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:40.508440   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:40.727285   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:40.728877   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:40.808438   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.007388   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:41.226920   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:41.227146   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:41.308547   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.508278   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:41.727886   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:41.728559   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:41.808475   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:41.928095   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:42.007516   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:42.226939   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:42.227988   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:42.312954   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:42.783037   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:42.783561   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:42.783582   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:42.809012   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:43.007865   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:43.226767   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:43.227551   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:43.308859   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:43.508254   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:43.747971   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:43.748157   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:43.808920   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:44.008591   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:44.228920   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:44.229445   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:44.328617   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:44.427970   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:44.508139   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:44.727257   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:44.728178   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:44.808525   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:45.007605   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:45.226095   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:45.226581   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:45.308065   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:45.508306   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:45.728209   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:45.728248   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:45.809174   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.007280   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:46.227083   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:46.227624   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:46.307894   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.508211   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:46.727191   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:46.728510   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:46.809583   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:46.927496   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:47.007417   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:47.227159   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:47.227595   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:47.309061   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:47.507231   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:47.727788   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:47.728199   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:47.807794   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.007775   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:48.227862   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:48.228222   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:48.308550   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.506871   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:48.726826   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:48.726994   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:48.811680   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:48.927556   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:49.007557   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:49.228370   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:49.232894   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:49.308737   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:49.507361   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.013041   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:50.013700   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:50.014019   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.014326   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.227635   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.227840   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:50.308783   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:50.507852   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:50.728955   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0814 16:11:50.729109   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:50.808880   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:51.007898   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:51.231705   21883 kapi.go:107] duration metric: took 52.009052914s to wait for kubernetes.io/minikube-addons=registry ...
	I0814 16:11:51.232128   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:51.308369   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:51.427088   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:51.506962   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:51.726421   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:51.810635   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:52.008143   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:52.227232   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:52.308638   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:52.508801   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:52.729754   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:52.808553   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:53.007609   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:53.227791   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:53.309397   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:53.428942   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:53.509081   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:53.726629   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:53.810056   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:54.007518   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:54.226467   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:54.308680   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:54.507900   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:54.742125   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:54.828414   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.009613   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:55.228673   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:55.327675   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.507601   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:55.726945   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:55.808119   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:55.927013   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:56.007706   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:56.232496   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:56.508942   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:56.509556   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:56.733765   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:56.814214   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.010484   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:57.227004   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:57.310037   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.507550   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:57.726665   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:57.810044   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:57.927270   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:11:58.007900   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:58.226434   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:58.308928   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:58.508230   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:58.728975   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:58.808718   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.009541   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:59.229673   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:59.308663   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.508513   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:11:59.726940   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:11:59.808273   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:11:59.927683   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:00.013942   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:00.235451   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:00.313101   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:00.508712   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:00.727383   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:00.828093   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:01.010158   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:01.233078   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:01.325043   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:01.510045   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:01.729672   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:01.810179   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:02.007132   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:02.227439   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:02.310266   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:02.431154   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:02.509197   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:02.727286   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:02.808663   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:03.007628   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:03.226511   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:03.308835   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:03.508526   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:03.727177   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:03.810202   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:04.008818   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:04.226738   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:04.308123   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:04.465824   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:04.531436   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:04.730065   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:04.832677   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:05.007811   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:05.227274   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:05.308903   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:05.508052   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:05.727150   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:05.810089   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:06.008440   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:06.227039   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:06.309707   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:06.509388   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:06.726806   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:06.808393   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:06.927154   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:07.007582   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:07.226694   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:07.308577   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:07.508627   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:07.728092   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:07.808983   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:08.007954   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:08.226812   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:08.308652   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:08.507040   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:08.726863   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:08.807988   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:08.973543   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:09.007222   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:09.227526   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:09.309757   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:09.506949   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:09.727245   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:10.033208   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:10.033966   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:10.226506   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:10.308108   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:10.507669   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:10.727045   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:10.808383   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:11.007986   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:11.227279   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:11.308145   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:11.428082   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:11.507064   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:11.726627   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:11.809388   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:12.008074   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:12.227704   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:12.308890   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:12.506591   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:12.726831   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:12.826973   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:13.009678   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:13.227321   21883 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0814 16:12:13.309032   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:13.507107   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:13.729043   21883 kapi.go:107] duration metric: took 1m14.506392378s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0814 16:12:13.809785   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:13.927978   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:14.009075   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:14.310561   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:14.513774   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:14.808323   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:15.007857   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:15.308691   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:15.507626   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:15.808349   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:15.928170   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:16.007672   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:16.308435   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:16.508580   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:16.808612   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:17.007696   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:17.309091   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:17.507099   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:17.809867   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:17.928803   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:18.010797   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:18.308603   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0814 16:12:18.508805   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:18.808770   21883 kapi.go:107] duration metric: took 1m16.003918275s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0814 16:12:18.810337   21883 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-521895 cluster.
	I0814 16:12:18.811598   21883 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0814 16:12:18.812714   21883 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0814 16:12:19.007750   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:19.508433   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:20.008244   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:20.427070   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:20.507532   21883 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0814 16:12:21.008596   21883 kapi.go:107] duration metric: took 1m20.005779592s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0814 16:12:21.010595   21883 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, metrics-server, ingress-dns, inspektor-gadget, helm-tiller, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0814 16:12:21.011829   21883 addons.go:510] duration metric: took 1m30.527733509s for enable addons: enabled=[storage-provisioner cloud-spanner metrics-server ingress-dns inspektor-gadget helm-tiller nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0814 16:12:22.427609   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:24.428953   21883 pod_ready.go:102] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"False"
	I0814 16:12:26.927677   21883 pod_ready.go:92] pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace has status "Ready":"True"
	I0814 16:12:26.927700   21883 pod_ready.go:81] duration metric: took 1m26.506131664s for pod "metrics-server-8988944d9-d5x8v" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:26.927710   21883 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-hb8bq" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:26.932052   21883 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-hb8bq" in "kube-system" namespace has status "Ready":"True"
	I0814 16:12:26.932073   21883 pod_ready.go:81] duration metric: took 4.356748ms for pod "nvidia-device-plugin-daemonset-hb8bq" in "kube-system" namespace to be "Ready" ...
	I0814 16:12:26.932091   21883 pod_ready.go:38] duration metric: took 1m27.698556013s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:12:26.932108   21883 api_server.go:52] waiting for apiserver process to appear ...
	I0814 16:12:26.932132   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:26.932176   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:26.974778   21883 cri.go:89] found id: "36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:26.974796   21883 cri.go:89] found id: ""
	I0814 16:12:26.974804   21883 logs.go:276] 1 containers: [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27]
	I0814 16:12:26.974844   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:26.979166   21883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:26.979230   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:27.019858   21883 cri.go:89] found id: "9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:27.019876   21883 cri.go:89] found id: ""
	I0814 16:12:27.019883   21883 logs.go:276] 1 containers: [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c]
	I0814 16:12:27.019941   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.024587   21883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:27.024655   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:27.068624   21883 cri.go:89] found id: "82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:27.068649   21883 cri.go:89] found id: ""
	I0814 16:12:27.068656   21883 logs.go:276] 1 containers: [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363]
	I0814 16:12:27.068711   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.072802   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:27.072860   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:27.112366   21883 cri.go:89] found id: "808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:27.112394   21883 cri.go:89] found id: ""
	I0814 16:12:27.112403   21883 logs.go:276] 1 containers: [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1]
	I0814 16:12:27.112493   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.117965   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:27.118020   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:27.161505   21883 cri.go:89] found id: "230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:27.161532   21883 cri.go:89] found id: ""
	I0814 16:12:27.161542   21883 logs.go:276] 1 containers: [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d]
	I0814 16:12:27.161597   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.166001   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:27.166054   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:27.204852   21883 cri.go:89] found id: "59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:27.204876   21883 cri.go:89] found id: ""
	I0814 16:12:27.204885   21883 logs.go:276] 1 containers: [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0]
	I0814 16:12:27.204942   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:27.208816   21883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:27.208880   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:27.246517   21883 cri.go:89] found id: ""
	I0814 16:12:27.246539   21883 logs.go:276] 0 containers: []
	W0814 16:12:27.246547   21883 logs.go:278] No container was found matching "kindnet"
	I0814 16:12:27.246559   21883 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:27.246572   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 16:12:27.310348   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:27.310529   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:27.337352   21883 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:27.337387   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:27.351734   21883 logs.go:123] Gathering logs for kube-scheduler [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1] ...
	I0814 16:12:27.351759   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:27.395992   21883 logs.go:123] Gathering logs for kube-controller-manager [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0] ...
	I0814 16:12:27.396032   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:27.456704   21883 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:27.456738   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:28.211996   21883 logs.go:123] Gathering logs for container status ...
	I0814 16:12:28.212045   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:28.274648   21883 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:28.274683   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:28.407272   21883 logs.go:123] Gathering logs for kube-apiserver [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27] ...
	I0814 16:12:28.407311   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:28.450978   21883 logs.go:123] Gathering logs for etcd [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c] ...
	I0814 16:12:28.451007   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:28.509108   21883 logs.go:123] Gathering logs for coredns [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363] ...
	I0814 16:12:28.509142   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:28.549422   21883 logs.go:123] Gathering logs for kube-proxy [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d] ...
	I0814 16:12:28.549450   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:28.587741   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:28.587766   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 16:12:28.587814   21883 out.go:239] X Problems detected in kubelet:
	W0814 16:12:28.587825   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:28.587832   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:28.587841   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:28.587847   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:12:38.589469   21883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:12:38.610407   21883 api_server.go:72] duration metric: took 1m48.126323258s to wait for apiserver process to appear ...
	I0814 16:12:38.610437   21883 api_server.go:88] waiting for apiserver healthz status ...
	I0814 16:12:38.610470   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:38.610529   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:38.651554   21883 cri.go:89] found id: "36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:38.651640   21883 cri.go:89] found id: ""
	I0814 16:12:38.651655   21883 logs.go:276] 1 containers: [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27]
	I0814 16:12:38.651706   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.656520   21883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:38.656584   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:38.701852   21883 cri.go:89] found id: "9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:38.701881   21883 cri.go:89] found id: ""
	I0814 16:12:38.701891   21883 logs.go:276] 1 containers: [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c]
	I0814 16:12:38.701938   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.705967   21883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:38.706028   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:38.741044   21883 cri.go:89] found id: "82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:38.741070   21883 cri.go:89] found id: ""
	I0814 16:12:38.741078   21883 logs.go:276] 1 containers: [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363]
	I0814 16:12:38.741121   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.748711   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:38.748772   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:38.783408   21883 cri.go:89] found id: "808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:38.783427   21883 cri.go:89] found id: ""
	I0814 16:12:38.783434   21883 logs.go:276] 1 containers: [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1]
	I0814 16:12:38.783484   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.787452   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:38.787507   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:38.824371   21883 cri.go:89] found id: "230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:38.824394   21883 cri.go:89] found id: ""
	I0814 16:12:38.824403   21883 logs.go:276] 1 containers: [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d]
	I0814 16:12:38.824457   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.828358   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:38.828426   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:38.867297   21883 cri.go:89] found id: "59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:38.867316   21883 cri.go:89] found id: ""
	I0814 16:12:38.867350   21883 logs.go:276] 1 containers: [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0]
	I0814 16:12:38.867408   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:38.871178   21883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:38.871227   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:38.917341   21883 cri.go:89] found id: ""
	I0814 16:12:38.917367   21883 logs.go:276] 0 containers: []
	W0814 16:12:38.917375   21883 logs.go:278] No container was found matching "kindnet"
	I0814 16:12:38.917382   21883 logs.go:123] Gathering logs for kube-proxy [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d] ...
	I0814 16:12:38.917396   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:38.950416   21883 logs.go:123] Gathering logs for kube-controller-manager [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0] ...
	I0814 16:12:38.950449   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:39.006272   21883 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:39.006302   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:39.125296   21883 logs.go:123] Gathering logs for kube-scheduler [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1] ...
	I0814 16:12:39.125320   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:39.169322   21883 logs.go:123] Gathering logs for kube-apiserver [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27] ...
	I0814 16:12:39.169351   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:39.223953   21883 logs.go:123] Gathering logs for etcd [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c] ...
	I0814 16:12:39.223981   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:39.292562   21883 logs.go:123] Gathering logs for coredns [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363] ...
	I0814 16:12:39.292593   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:39.334707   21883 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:39.334730   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:40.302593   21883 logs.go:123] Gathering logs for container status ...
	I0814 16:12:40.302637   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:40.355184   21883 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:40.355211   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 16:12:40.409332   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:40.409513   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:40.437296   21883 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:40.437320   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:40.452508   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:40.452535   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 16:12:40.452587   21883 out.go:239] X Problems detected in kubelet:
	W0814 16:12:40.452595   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:40.452602   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:40.452609   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:40.452615   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:12:50.453536   21883 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0814 16:12:50.460137   21883 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0814 16:12:50.461166   21883 api_server.go:141] control plane version: v1.31.0
	I0814 16:12:50.461193   21883 api_server.go:131] duration metric: took 11.850743129s to wait for apiserver health ...
	I0814 16:12:50.461201   21883 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 16:12:50.461219   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 16:12:50.461261   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 16:12:50.504955   21883 cri.go:89] found id: "36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:50.504974   21883 cri.go:89] found id: ""
	I0814 16:12:50.504981   21883 logs.go:276] 1 containers: [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27]
	I0814 16:12:50.505037   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.508772   21883 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 16:12:50.508842   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 16:12:50.543907   21883 cri.go:89] found id: "9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:50.543926   21883 cri.go:89] found id: ""
	I0814 16:12:50.543933   21883 logs.go:276] 1 containers: [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c]
	I0814 16:12:50.543976   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.547941   21883 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 16:12:50.547994   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 16:12:50.585313   21883 cri.go:89] found id: "82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:50.585333   21883 cri.go:89] found id: ""
	I0814 16:12:50.585345   21883 logs.go:276] 1 containers: [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363]
	I0814 16:12:50.585395   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.589574   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 16:12:50.589638   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 16:12:50.633893   21883 cri.go:89] found id: "808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:50.633910   21883 cri.go:89] found id: ""
	I0814 16:12:50.633917   21883 logs.go:276] 1 containers: [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1]
	I0814 16:12:50.633959   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.638080   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 16:12:50.638131   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 16:12:50.684154   21883 cri.go:89] found id: "230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:50.684183   21883 cri.go:89] found id: ""
	I0814 16:12:50.684191   21883 logs.go:276] 1 containers: [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d]
	I0814 16:12:50.684245   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.689888   21883 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 16:12:50.689951   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 16:12:50.727992   21883 cri.go:89] found id: "59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:50.728024   21883 cri.go:89] found id: ""
	I0814 16:12:50.728033   21883 logs.go:276] 1 containers: [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0]
	I0814 16:12:50.728087   21883 ssh_runner.go:195] Run: which crictl
	I0814 16:12:50.732134   21883 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 16:12:50.732200   21883 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 16:12:50.784221   21883 cri.go:89] found id: ""
	I0814 16:12:50.784250   21883 logs.go:276] 0 containers: []
	W0814 16:12:50.784261   21883 logs.go:278] No container was found matching "kindnet"
	I0814 16:12:50.784272   21883 logs.go:123] Gathering logs for kube-scheduler [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1] ...
	I0814 16:12:50.784286   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1"
	I0814 16:12:50.827441   21883 logs.go:123] Gathering logs for container status ...
	I0814 16:12:50.827475   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 16:12:50.873582   21883 logs.go:123] Gathering logs for dmesg ...
	I0814 16:12:50.873611   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 16:12:50.889073   21883 logs.go:123] Gathering logs for etcd [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c] ...
	I0814 16:12:50.889105   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c"
	I0814 16:12:50.948168   21883 logs.go:123] Gathering logs for kube-apiserver [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27] ...
	I0814 16:12:50.948209   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27"
	I0814 16:12:51.008579   21883 logs.go:123] Gathering logs for coredns [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363] ...
	I0814 16:12:51.008609   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363"
	I0814 16:12:51.049331   21883 logs.go:123] Gathering logs for kube-proxy [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d] ...
	I0814 16:12:51.049361   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d"
	I0814 16:12:51.092024   21883 logs.go:123] Gathering logs for kube-controller-manager [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0] ...
	I0814 16:12:51.092051   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0"
	I0814 16:12:51.150309   21883 logs.go:123] Gathering logs for CRI-O ...
	I0814 16:12:51.150342   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 16:12:51.938495   21883 logs.go:123] Gathering logs for kubelet ...
	I0814 16:12:51.938538   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 16:12:51.992141   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:51.992325   21883 logs.go:138] Found kubelet problem: Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:52.022093   21883 logs.go:123] Gathering logs for describe nodes ...
	I0814 16:12:52.022120   21883 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 16:12:52.162979   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:52.163005   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0814 16:12:52.163053   21883 out.go:239] X Problems detected in kubelet:
	W0814 16:12:52.163061   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: W0814 16:11:02.702884    1224 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-521895" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-521895' and this object
	W0814 16:12:52.163068   21883 out.go:239]   Aug 14 16:11:02 addons-521895 kubelet[1224]: E0814 16:11:02.702930    1224 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-521895\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-521895' and this object" logger="UnhandledError"
	I0814 16:12:52.163074   21883 out.go:304] Setting ErrFile to fd 2...
	I0814 16:12:52.163079   21883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:13:02.172734   21883 system_pods.go:59] 18 kube-system pods found
	I0814 16:13:02.172763   21883 system_pods.go:61] "coredns-6f6b679f8f-7rf58" [86130fa5-9013-49d5-bc2b-3ddf60ec917a] Running
	I0814 16:13:02.172769   21883 system_pods.go:61] "csi-hostpath-attacher-0" [47d6f4b0-c75e-4ce3-ad8c-2d53e5a19dd4] Running
	I0814 16:13:02.172772   21883 system_pods.go:61] "csi-hostpath-resizer-0" [086e76c0-a74c-44df-9be2-14402f042765] Running
	I0814 16:13:02.172776   21883 system_pods.go:61] "csi-hostpathplugin-z69n6" [e79768e2-a157-4ba9-a9de-eb6315d2700f] Running
	I0814 16:13:02.172779   21883 system_pods.go:61] "etcd-addons-521895" [749e1439-7f4b-4ac1-a469-04f8d4974517] Running
	I0814 16:13:02.172782   21883 system_pods.go:61] "kube-apiserver-addons-521895" [f509e56e-a614-4596-8e29-a6dc0c8e0430] Running
	I0814 16:13:02.172785   21883 system_pods.go:61] "kube-controller-manager-addons-521895" [b1d6cc2e-07cb-4931-a1d3-e3f4c74db5d7] Running
	I0814 16:13:02.172789   21883 system_pods.go:61] "kube-ingress-dns-minikube" [025b355f-aadc-4f6b-a2de-96a654405923] Running
	I0814 16:13:02.172791   21883 system_pods.go:61] "kube-proxy-djhvc" [ca62976b-59e3-41d9-9241-5beb8738bdb4] Running
	I0814 16:13:02.172794   21883 system_pods.go:61] "kube-scheduler-addons-521895" [b4a0abd4-d0df-48f3-b377-4b0678a452c2] Running
	I0814 16:13:02.172796   21883 system_pods.go:61] "metrics-server-8988944d9-d5x8v" [efa28343-d15d-4a26-bc87-4c5c4e6cce30] Running
	I0814 16:13:02.172799   21883 system_pods.go:61] "nvidia-device-plugin-daemonset-hb8bq" [36cab318-9976-4377-b906-b14c2be76513] Running
	I0814 16:13:02.172802   21883 system_pods.go:61] "registry-6fb4cdfc84-lbmb2" [4d1c8ab4-e3b2-4f6d-a2cb-c8356de3d1f8] Running
	I0814 16:13:02.172812   21883 system_pods.go:61] "registry-proxy-rhc59" [3a27fa71-fb85-4942-be2d-fcc16d40a026] Running
	I0814 16:13:02.172816   21883 system_pods.go:61] "snapshot-controller-56fcc65765-9v2kk" [d3a1971c-1a60-4de4-bfc0-aaa22f03cc18] Running
	I0814 16:13:02.172821   21883 system_pods.go:61] "snapshot-controller-56fcc65765-vxxwk" [6fb6d8b0-d7a1-4dee-9f27-b63f3970aa01] Running
	I0814 16:13:02.172825   21883 system_pods.go:61] "storage-provisioner" [582ce9ea-b602-4a47-b4a7-a4b7f8658252] Running
	I0814 16:13:02.172829   21883 system_pods.go:61] "tiller-deploy-b48cc5f79-tjffm" [be865efe-6514-4d4f-b8e3-6c2ccec2e6f2] Running
	I0814 16:13:02.172840   21883 system_pods.go:74] duration metric: took 11.711633618s to wait for pod list to return data ...
	I0814 16:13:02.172851   21883 default_sa.go:34] waiting for default service account to be created ...
	I0814 16:13:02.175052   21883 default_sa.go:45] found service account: "default"
	I0814 16:13:02.175067   21883 default_sa.go:55] duration metric: took 2.210207ms for default service account to be created ...
	I0814 16:13:02.175072   21883 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 16:13:02.184395   21883 system_pods.go:86] 18 kube-system pods found
	I0814 16:13:02.184423   21883 system_pods.go:89] "coredns-6f6b679f8f-7rf58" [86130fa5-9013-49d5-bc2b-3ddf60ec917a] Running
	I0814 16:13:02.184428   21883 system_pods.go:89] "csi-hostpath-attacher-0" [47d6f4b0-c75e-4ce3-ad8c-2d53e5a19dd4] Running
	I0814 16:13:02.184433   21883 system_pods.go:89] "csi-hostpath-resizer-0" [086e76c0-a74c-44df-9be2-14402f042765] Running
	I0814 16:13:02.184437   21883 system_pods.go:89] "csi-hostpathplugin-z69n6" [e79768e2-a157-4ba9-a9de-eb6315d2700f] Running
	I0814 16:13:02.184441   21883 system_pods.go:89] "etcd-addons-521895" [749e1439-7f4b-4ac1-a469-04f8d4974517] Running
	I0814 16:13:02.184446   21883 system_pods.go:89] "kube-apiserver-addons-521895" [f509e56e-a614-4596-8e29-a6dc0c8e0430] Running
	I0814 16:13:02.184450   21883 system_pods.go:89] "kube-controller-manager-addons-521895" [b1d6cc2e-07cb-4931-a1d3-e3f4c74db5d7] Running
	I0814 16:13:02.184454   21883 system_pods.go:89] "kube-ingress-dns-minikube" [025b355f-aadc-4f6b-a2de-96a654405923] Running
	I0814 16:13:02.184458   21883 system_pods.go:89] "kube-proxy-djhvc" [ca62976b-59e3-41d9-9241-5beb8738bdb4] Running
	I0814 16:13:02.184465   21883 system_pods.go:89] "kube-scheduler-addons-521895" [b4a0abd4-d0df-48f3-b377-4b0678a452c2] Running
	I0814 16:13:02.184471   21883 system_pods.go:89] "metrics-server-8988944d9-d5x8v" [efa28343-d15d-4a26-bc87-4c5c4e6cce30] Running
	I0814 16:13:02.184477   21883 system_pods.go:89] "nvidia-device-plugin-daemonset-hb8bq" [36cab318-9976-4377-b906-b14c2be76513] Running
	I0814 16:13:02.184485   21883 system_pods.go:89] "registry-6fb4cdfc84-lbmb2" [4d1c8ab4-e3b2-4f6d-a2cb-c8356de3d1f8] Running
	I0814 16:13:02.184491   21883 system_pods.go:89] "registry-proxy-rhc59" [3a27fa71-fb85-4942-be2d-fcc16d40a026] Running
	I0814 16:13:02.184497   21883 system_pods.go:89] "snapshot-controller-56fcc65765-9v2kk" [d3a1971c-1a60-4de4-bfc0-aaa22f03cc18] Running
	I0814 16:13:02.184501   21883 system_pods.go:89] "snapshot-controller-56fcc65765-vxxwk" [6fb6d8b0-d7a1-4dee-9f27-b63f3970aa01] Running
	I0814 16:13:02.184506   21883 system_pods.go:89] "storage-provisioner" [582ce9ea-b602-4a47-b4a7-a4b7f8658252] Running
	I0814 16:13:02.184510   21883 system_pods.go:89] "tiller-deploy-b48cc5f79-tjffm" [be865efe-6514-4d4f-b8e3-6c2ccec2e6f2] Running
	I0814 16:13:02.184516   21883 system_pods.go:126] duration metric: took 9.438972ms to wait for k8s-apps to be running ...
	I0814 16:13:02.184527   21883 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 16:13:02.184578   21883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:13:02.198917   21883 system_svc.go:56] duration metric: took 14.385391ms WaitForService to wait for kubelet
	I0814 16:13:02.198939   21883 kubeadm.go:582] duration metric: took 2m11.714863667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:13:02.198960   21883 node_conditions.go:102] verifying NodePressure condition ...
	I0814 16:13:02.202210   21883 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:13:02.202230   21883 node_conditions.go:123] node cpu capacity is 2
	I0814 16:13:02.202241   21883 node_conditions.go:105] duration metric: took 3.275262ms to run NodePressure ...
	I0814 16:13:02.202251   21883 start.go:241] waiting for startup goroutines ...
	I0814 16:13:02.202257   21883 start.go:246] waiting for cluster config update ...
	I0814 16:13:02.202273   21883 start.go:255] writing updated cluster config ...
	I0814 16:13:02.202543   21883 ssh_runner.go:195] Run: rm -f paused
	I0814 16:13:02.249953   21883 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 16:13:02.252076   21883 out.go:177] * Done! kubectl is now configured to use "addons-521895" cluster and "default" namespace by default
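	The per-component logs that logs.go gathers above (journalctl for kubelet and CRI-O, crictl for individual containers) can be replayed by hand against the same node. A minimal sketch, assuming the addons-521895 profile is still running and using a placeholder <container-id> taken from the crictl listing; the commands mirror the ones shown being executed in the log:

	  # list the kube-apiserver container ID inside the minikube VM
	  out/minikube-linux-amd64 -p addons-521895 ssh "sudo crictl ps -a --quiet --name=kube-apiserver"
	  # tail the kubelet and CRI-O journals, as the log-gathering step does
	  out/minikube-linux-amd64 -p addons-521895 ssh "sudo journalctl -u kubelet -n 400"
	  out/minikube-linux-amd64 -p addons-521895 ssh "sudo journalctl -u crio -n 400"
	  # fetch the last 400 lines of one container (substitute the ID from the listing above)
	  out/minikube-linux-amd64 -p addons-521895 ssh "sudo /usr/bin/crictl logs --tail 400 <container-id>"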
	
	
	==> CRI-O <==
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.947598150Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652320947569847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51873306-9e6f-4632-a7da-cad781b15cac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.948233940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7f79b3c-e579-4d9a-950d-2deb593cbea3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.948325518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7f79b3c-e579-4d9a-950d-2deb593cbea3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.948579624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47c4168722e1637a10e7c34aaec5fef9a1ace31a05ae182bb2c71a6fb7b6413a,PodSandboxId:635cbf32feea39fe8a44e2b7c25066854454f2fe11a8c77ec7fc1ba58e55ff69,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723652162861464408,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-66swq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ef06ce6-4af7-44ee-b705-3b4afb65b830,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b73bad190ae9c817aee17e2d686fad84ad9d03119f1d456cee173e028381ab,PodSandboxId:8b94b9345aeee5e3e29d23d0d035613e1b5f37d0f80b6d8f32f5a6d6e4de76c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723652024172556749,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0036eca6-d67d-4be0-8ac1-c9992f0e271c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed96fd2b54bfa0808bcdc715a349c08aaf7dc1859be3eb443813a066c53b9963,PodSandboxId:57342eaf36362618a0104852dd1bb86ff6026d34b63a310fb7c0b627b90dbe4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723651985823569019,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bd5e3c0-27de-4eb3-a
bf2-6ec6aaba4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e168f3ef7e67469d2d9f4e7ff85b00db25d41c565df9e630e04f88616c903081,PodSandboxId:f68a9f2de09aac1c02fca4b5c99be25dd88b75d8f0d607a6830e1795e8777aef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723651884969924930,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-d5x8v,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: efa28343-d15d-4a26-bc87-4c5c4e6cce30,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a,PodSandboxId:8a0577b5f645ee8536bf328a05a802e79a61956ed2feef0b58a306f32248437e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723651856405063975,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582ce9ea-b602-4a47-b4a7-a4b7f8658252,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363,PodSandboxId:d4a139ca52c61c8840dde82b12112c4321349dc23dab2603d385958c952e7ccb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723651853941681271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-7rf58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86130fa5-9013-49d5-bc2b-3ddf60ec917a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d,PodSandboxId:52036d37c7ebe82c3fe23042360bf609e0ee614eb7325de863eed3ab2e30cde8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723651851693609744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djhvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca62976b-59e3-41d9-9241-5beb8738bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0,PodSandboxId:fc0a277acf0707799158ab115b03d3754d21921f2352952095c9fe662eb4a985,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723651840047578598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a8977315d43d0334fc879b7776f617,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27,PodSandboxId:110f9b94800de7bf90513c3fe06b2fe6526a01c25196a65a2ec4e96b38a0c179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf2
9babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723651839973403882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9d5a3befdc4c50408b6bfa01190b64,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c,PodSandboxId:f5e399ea8b90482e27e59d9367f526326b5d3e41b506c1ec0fb755ded6339eef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_RUNNING,CreatedAt:1723651840014176916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967c8be72d3573e4e486a328526e6b08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1,PodSandboxId:cc8fe13ed7adb3752737ee3cfe0be8e73c84bb2aa633e35d33cae5706b721091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17236
51839907523946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 678be17c2681820daabe61cccf2292c1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b7f79b3c-e579-4d9a-950d-2deb593cbea3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.985257521Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=661e85d9-13c0-44c6-b792-c05da353b8b4 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.985381663Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=661e85d9-13c0-44c6-b792-c05da353b8b4 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.986248351Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ddfb628-3034-47fe-94f2-4d13c04ecba1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.987487220Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652320987461891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ddfb628-3034-47fe-94f2-4d13c04ecba1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.988001219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89bba6de-89d6-45ba-a0ed-e74b4d205eda name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.988068463Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89bba6de-89d6-45ba-a0ed-e74b4d205eda name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:40 addons-521895 crio[683]: time="2024-08-14 16:18:40.988363457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47c4168722e1637a10e7c34aaec5fef9a1ace31a05ae182bb2c71a6fb7b6413a,PodSandboxId:635cbf32feea39fe8a44e2b7c25066854454f2fe11a8c77ec7fc1ba58e55ff69,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723652162861464408,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-66swq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ef06ce6-4af7-44ee-b705-3b4afb65b830,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b73bad190ae9c817aee17e2d686fad84ad9d03119f1d456cee173e028381ab,PodSandboxId:8b94b9345aeee5e3e29d23d0d035613e1b5f37d0f80b6d8f32f5a6d6e4de76c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723652024172556749,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0036eca6-d67d-4be0-8ac1-c9992f0e271c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed96fd2b54bfa0808bcdc715a349c08aaf7dc1859be3eb443813a066c53b9963,PodSandboxId:57342eaf36362618a0104852dd1bb86ff6026d34b63a310fb7c0b627b90dbe4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723651985823569019,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bd5e3c0-27de-4eb3-a
bf2-6ec6aaba4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e168f3ef7e67469d2d9f4e7ff85b00db25d41c565df9e630e04f88616c903081,PodSandboxId:f68a9f2de09aac1c02fca4b5c99be25dd88b75d8f0d607a6830e1795e8777aef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723651884969924930,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-d5x8v,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: efa28343-d15d-4a26-bc87-4c5c4e6cce30,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a,PodSandboxId:8a0577b5f645ee8536bf328a05a802e79a61956ed2feef0b58a306f32248437e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723651856405063975,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582ce9ea-b602-4a47-b4a7-a4b7f8658252,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363,PodSandboxId:d4a139ca52c61c8840dde82b12112c4321349dc23dab2603d385958c952e7ccb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723651853941681271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-7rf58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86130fa5-9013-49d5-bc2b-3ddf60ec917a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d,PodSandboxId:52036d37c7ebe82c3fe23042360bf609e0ee614eb7325de863eed3ab2e30cde8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723651851693609744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djhvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca62976b-59e3-41d9-9241-5beb8738bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0,PodSandboxId:fc0a277acf0707799158ab115b03d3754d21921f2352952095c9fe662eb4a985,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723651840047578598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a8977315d43d0334fc879b7776f617,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27,PodSandboxId:110f9b94800de7bf90513c3fe06b2fe6526a01c25196a65a2ec4e96b38a0c179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf2
9babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723651839973403882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9d5a3befdc4c50408b6bfa01190b64,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c,PodSandboxId:f5e399ea8b90482e27e59d9367f526326b5d3e41b506c1ec0fb755ded6339eef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_RUNNING,CreatedAt:1723651840014176916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967c8be72d3573e4e486a328526e6b08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1,PodSandboxId:cc8fe13ed7adb3752737ee3cfe0be8e73c84bb2aa633e35d33cae5706b721091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17236
51839907523946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 678be17c2681820daabe61cccf2292c1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89bba6de-89d6-45ba-a0ed-e74b4d205eda name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.023464546Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6dc74fda-9478-4568-a5f3-094f2c1d3f7f name=/runtime.v1.RuntimeService/Version
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.023538815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6dc74fda-9478-4568-a5f3-094f2c1d3f7f name=/runtime.v1.RuntimeService/Version
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.024837485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75da6c84-f182-467b-acd1-4f9ef4b4f306 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.026223436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652321026190579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75da6c84-f182-467b-acd1-4f9ef4b4f306 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.026894051Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb483159-8c04-4b8e-842a-44197a2ec6f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.027025320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb483159-8c04-4b8e-842a-44197a2ec6f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.027377414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47c4168722e1637a10e7c34aaec5fef9a1ace31a05ae182bb2c71a6fb7b6413a,PodSandboxId:635cbf32feea39fe8a44e2b7c25066854454f2fe11a8c77ec7fc1ba58e55ff69,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723652162861464408,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-66swq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ef06ce6-4af7-44ee-b705-3b4afb65b830,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b73bad190ae9c817aee17e2d686fad84ad9d03119f1d456cee173e028381ab,PodSandboxId:8b94b9345aeee5e3e29d23d0d035613e1b5f37d0f80b6d8f32f5a6d6e4de76c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723652024172556749,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0036eca6-d67d-4be0-8ac1-c9992f0e271c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed96fd2b54bfa0808bcdc715a349c08aaf7dc1859be3eb443813a066c53b9963,PodSandboxId:57342eaf36362618a0104852dd1bb86ff6026d34b63a310fb7c0b627b90dbe4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723651985823569019,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bd5e3c0-27de-4eb3-a
bf2-6ec6aaba4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e168f3ef7e67469d2d9f4e7ff85b00db25d41c565df9e630e04f88616c903081,PodSandboxId:f68a9f2de09aac1c02fca4b5c99be25dd88b75d8f0d607a6830e1795e8777aef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723651884969924930,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-d5x8v,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: efa28343-d15d-4a26-bc87-4c5c4e6cce30,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a,PodSandboxId:8a0577b5f645ee8536bf328a05a802e79a61956ed2feef0b58a306f32248437e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723651856405063975,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582ce9ea-b602-4a47-b4a7-a4b7f8658252,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363,PodSandboxId:d4a139ca52c61c8840dde82b12112c4321349dc23dab2603d385958c952e7ccb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723651853941681271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-7rf58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86130fa5-9013-49d5-bc2b-3ddf60ec917a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d,PodSandboxId:52036d37c7ebe82c3fe23042360bf609e0ee614eb7325de863eed3ab2e30cde8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723651851693609744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djhvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca62976b-59e3-41d9-9241-5beb8738bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0,PodSandboxId:fc0a277acf0707799158ab115b03d3754d21921f2352952095c9fe662eb4a985,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723651840047578598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a8977315d43d0334fc879b7776f617,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27,PodSandboxId:110f9b94800de7bf90513c3fe06b2fe6526a01c25196a65a2ec4e96b38a0c179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf2
9babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723651839973403882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9d5a3befdc4c50408b6bfa01190b64,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c,PodSandboxId:f5e399ea8b90482e27e59d9367f526326b5d3e41b506c1ec0fb755ded6339eef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_RUNNING,CreatedAt:1723651840014176916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967c8be72d3573e4e486a328526e6b08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1,PodSandboxId:cc8fe13ed7adb3752737ee3cfe0be8e73c84bb2aa633e35d33cae5706b721091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17236
51839907523946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 678be17c2681820daabe61cccf2292c1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb483159-8c04-4b8e-842a-44197a2ec6f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.057616136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af6b3e4d-bc5c-4e92-9620-9ce8d8d68a97 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.057705860Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af6b3e4d-bc5c-4e92-9620-9ce8d8d68a97 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.058617594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75553ec3-f65c-4e75-9933-70f6e9ceadfe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.060017953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652321059993263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75553ec3-f65c-4e75-9933-70f6e9ceadfe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.060584400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f51340d6-1b64-4018-abaf-5ab698b8ed3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.060681814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f51340d6-1b64-4018-abaf-5ab698b8ed3f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:18:41 addons-521895 crio[683]: time="2024-08-14 16:18:41.061001858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:47c4168722e1637a10e7c34aaec5fef9a1ace31a05ae182bb2c71a6fb7b6413a,PodSandboxId:635cbf32feea39fe8a44e2b7c25066854454f2fe11a8c77ec7fc1ba58e55ff69,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723652162861464408,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-66swq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ef06ce6-4af7-44ee-b705-3b4afb65b830,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b73bad190ae9c817aee17e2d686fad84ad9d03119f1d456cee173e028381ab,PodSandboxId:8b94b9345aeee5e3e29d23d0d035613e1b5f37d0f80b6d8f32f5a6d6e4de76c1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723652024172556749,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0036eca6-d67d-4be0-8ac1-c9992f0e271c,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed96fd2b54bfa0808bcdc715a349c08aaf7dc1859be3eb443813a066c53b9963,PodSandboxId:57342eaf36362618a0104852dd1bb86ff6026d34b63a310fb7c0b627b90dbe4b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723651985823569019,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9bd5e3c0-27de-4eb3-a
bf2-6ec6aaba4d90,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e168f3ef7e67469d2d9f4e7ff85b00db25d41c565df9e630e04f88616c903081,PodSandboxId:f68a9f2de09aac1c02fca4b5c99be25dd88b75d8f0d607a6830e1795e8777aef,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723651884969924930,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-d5x8v,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: efa28343-d15d-4a26-bc87-4c5c4e6cce30,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a,PodSandboxId:8a0577b5f645ee8536bf328a05a802e79a61956ed2feef0b58a306f32248437e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723651856405063975,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582ce9ea-b602-4a47-b4a7-a4b7f8658252,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363,PodSandboxId:d4a139ca52c61c8840dde82b12112c4321349dc23dab2603d385958c952e7ccb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723651853941681271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-7rf58,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86130fa5-9013-49d5-bc2b-3ddf60ec917a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d,PodSandboxId:52036d37c7ebe82c3fe23042360bf609e0ee614eb7325de863eed3ab2e30cde8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723651851693609744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djhvc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca62976b-59e3-41d9-9241-5beb8738bdb4,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0,PodSandboxId:fc0a277acf0707799158ab115b03d3754d21921f2352952095c9fe662eb4a985,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723651840047578598,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90a8977315d43d0334fc879b7776f617,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27,PodSandboxId:110f9b94800de7bf90513c3fe06b2fe6526a01c25196a65a2ec4e96b38a0c179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf2
9babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723651839973403882,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a9d5a3befdc4c50408b6bfa01190b64,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c,PodSandboxId:f5e399ea8b90482e27e59d9367f526326b5d3e41b506c1ec0fb755ded6339eef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,St
ate:CONTAINER_RUNNING,CreatedAt:1723651840014176916,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 967c8be72d3573e4e486a328526e6b08,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1,PodSandboxId:cc8fe13ed7adb3752737ee3cfe0be8e73c84bb2aa633e35d33cae5706b721091,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17236
51839907523946,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-521895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 678be17c2681820daabe61cccf2292c1,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f51340d6-1b64-4018-abaf-5ab698b8ed3f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	47c4168722e16       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   635cbf32feea3       hello-world-app-55bf9c44b4-66swq
	88b73bad190ae       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   8b94b9345aeee       nginx
	ed96fd2b54bfa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   57342eaf36362       busybox
	e168f3ef7e674       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   f68a9f2de09aa       metrics-server-8988944d9-d5x8v
	8d99a130a829f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   8a0577b5f645e       storage-provisioner
	82e1477a10cc7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   d4a139ca52c61       coredns-6f6b679f8f-7rf58
	230305fe29454       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   52036d37c7ebe       kube-proxy-djhvc
	59a7d413ae30c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        8 minutes ago       Running             kube-controller-manager   0                   fc0a277acf070       kube-controller-manager-addons-521895
	9ab2a01dd198e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   f5e399ea8b904       etcd-addons-521895
	36daf0f60c2e9       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        8 minutes ago       Running             kube-apiserver            0                   110f9b94800de       kube-apiserver-addons-521895
	808f6e1d6cb54       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        8 minutes ago       Running             kube-scheduler            0                   cc8fe13ed7adb       kube-scheduler-addons-521895
	
	
	==> coredns [82e1477a10cc734a9bb1f3a946272f009596206da2a97ac8b4de46bef5fa9363] <==
	[INFO] 10.244.0.8:51757 - 19041 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.002086208s
	[INFO] 10.244.0.8:44376 - 30158 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170383s
	[INFO] 10.244.0.8:44376 - 3011 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000206073s
	[INFO] 10.244.0.8:43200 - 54742 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137727s
	[INFO] 10.244.0.8:43200 - 37336 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094919s
	[INFO] 10.244.0.8:59625 - 24812 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148285s
	[INFO] 10.244.0.8:59625 - 8426 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000292179s
	[INFO] 10.244.0.8:50221 - 61004 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000081016s
	[INFO] 10.244.0.8:50221 - 27443 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000138708s
	[INFO] 10.244.0.8:39268 - 21789 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005817s
	[INFO] 10.244.0.8:39268 - 40467 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000133597s
	[INFO] 10.244.0.8:52216 - 27921 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048015s
	[INFO] 10.244.0.8:52216 - 38935 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084291s
	[INFO] 10.244.0.8:54703 - 45434 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000063555s
	[INFO] 10.244.0.8:54703 - 57460 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144904s
	[INFO] 10.244.0.22:58079 - 46803 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000450458s
	[INFO] 10.244.0.22:58414 - 29689 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080494s
	[INFO] 10.244.0.22:60407 - 58327 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100183s
	[INFO] 10.244.0.22:37758 - 11954 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000197967s
	[INFO] 10.244.0.22:51074 - 50169 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076677s
	[INFO] 10.244.0.22:48023 - 37918 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00005431s
	[INFO] 10.244.0.22:32981 - 9654 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000868448s
	[INFO] 10.244.0.22:45595 - 27232 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000413993s
	[INFO] 10.244.0.24:34296 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000377869s
	[INFO] 10.244.0.24:49396 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106619s
	
	
	==> describe nodes <==
	Name:               addons-521895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-521895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=addons-521895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_10_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-521895
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:10:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-521895
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:18:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:16:22 +0000   Wed, 14 Aug 2024 16:10:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:16:22 +0000   Wed, 14 Aug 2024 16:10:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:16:22 +0000   Wed, 14 Aug 2024 16:10:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:16:22 +0000   Wed, 14 Aug 2024 16:10:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    addons-521895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 66c4c615e22741dfb4a932e29dcfcd60
	  System UUID:                66c4c615-e227-41df-b4a9-32e29dcfcd60
	  Boot ID:                    82d08b09-812d-45fd-ab2e-b0075dfc9acb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	  default                     hello-world-app-55bf9c44b4-66swq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 coredns-6f6b679f8f-7rf58                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m51s
	  kube-system                 etcd-addons-521895                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m58s
	  kube-system                 kube-apiserver-addons-521895             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-controller-manager-addons-521895    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-proxy-djhvc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 kube-scheduler-addons-521895             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 metrics-server-8988944d9-d5x8v           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m45s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m48s  kube-proxy       
	  Normal  Starting                 7m56s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m56s  kubelet          Node addons-521895 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m56s  kubelet          Node addons-521895 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m56s  kubelet          Node addons-521895 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m55s  kubelet          Node addons-521895 status is now: NodeReady
	  Normal  RegisteredNode           7m52s  node-controller  Node addons-521895 event: Registered Node addons-521895 in Controller
	
	
	==> dmesg <==
	[  +5.692772] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.354953] kauditd_printk_skb: 32 callbacks suppressed
	[ +12.000746] kauditd_printk_skb: 20 callbacks suppressed
	[Aug14 16:12] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.543087] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.043659] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.082876] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.461286] kauditd_printk_skb: 40 callbacks suppressed
	[Aug14 16:13] kauditd_printk_skb: 28 callbacks suppressed
	[ +13.100045] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.768894] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.304287] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.360381] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.735512] kauditd_printk_skb: 47 callbacks suppressed
	[  +6.058200] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.055839] kauditd_printk_skb: 8 callbacks suppressed
	[Aug14 16:14] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.880346] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.305462] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.783853] kauditd_printk_skb: 6 callbacks suppressed
	[ +10.783087] kauditd_printk_skb: 24 callbacks suppressed
	[ +13.484468] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.097879] kauditd_printk_skb: 13 callbacks suppressed
	[Aug14 16:15] kauditd_printk_skb: 4 callbacks suppressed
	[Aug14 16:16] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [9ab2a01dd198e8125707403e70229c89b51636d7906d1f7f473df4ea1e93863c] <==
	{"level":"info","ts":"2024-08-14T16:11:56.492883Z","caller":"traceutil/trace.go:171","msg":"trace[173330722] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1020; }","duration":"151.339798ms","start":"2024-08-14T16:11:56.341531Z","end":"2024-08-14T16:11:56.492871Z","steps":["trace[173330722] 'range keys from in-memory index tree'  (duration: 151.251861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:11:56.492903Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.857827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:11:56.492938Z","caller":"traceutil/trace.go:171","msg":"trace[1475812983] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1020; }","duration":"231.90114ms","start":"2024-08-14T16:11:56.261027Z","end":"2024-08-14T16:11:56.492928Z","steps":["trace[1475812983] 'range keys from in-memory index tree'  (duration: 231.786452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:11:56.493042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.58381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:11:56.493056Z","caller":"traceutil/trace.go:171","msg":"trace[2089169036] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1020; }","duration":"197.598653ms","start":"2024-08-14T16:11:56.295453Z","end":"2024-08-14T16:11:56.493051Z","steps":["trace[2089169036] 'range keys from in-memory index tree'  (duration: 197.508719ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:11:56.717488Z","caller":"traceutil/trace.go:171","msg":"trace[27944803] transaction","detail":"{read_only:false; response_revision:1021; number_of_response:1; }","duration":"222.449565ms","start":"2024-08-14T16:11:56.495011Z","end":"2024-08-14T16:11:56.717461Z","steps":["trace[27944803] 'process raft request'  (duration: 222.369686ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:11:56.717797Z","caller":"traceutil/trace.go:171","msg":"trace[615001554] linearizableReadLoop","detail":"{readStateIndex:1054; appliedIndex:1054; }","duration":"219.746386ms","start":"2024-08-14T16:11:56.498042Z","end":"2024-08-14T16:11:56.717789Z","steps":["trace[615001554] 'read index received'  (duration: 219.743127ms)","trace[615001554] 'applied index is now lower than readState.Index'  (duration: 2.506µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:11:56.718110Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.050868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-521895\" ","response":"range_response_count:1 size:7359"}
	{"level":"info","ts":"2024-08-14T16:11:56.718159Z","caller":"traceutil/trace.go:171","msg":"trace[1889545156] range","detail":"{range_begin:/registry/minions/addons-521895; range_end:; response_count:1; response_revision:1021; }","duration":"220.111988ms","start":"2024-08-14T16:11:56.498040Z","end":"2024-08-14T16:11:56.718152Z","steps":["trace[1889545156] 'agreement among raft nodes before linearized reading'  (duration: 219.992418ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:12:10.015781Z","caller":"traceutil/trace.go:171","msg":"trace[115314874] linearizableReadLoop","detail":"{readStateIndex:1161; appliedIndex:1160; }","duration":"221.021506ms","start":"2024-08-14T16:12:09.794737Z","end":"2024-08-14T16:12:10.015759Z","steps":["trace[115314874] 'read index received'  (duration: 217.385432ms)","trace[115314874] 'applied index is now lower than readState.Index'  (duration: 3.635095ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:12:10.015954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.159667ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:12:10.016061Z","caller":"traceutil/trace.go:171","msg":"trace[958517782] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1125; }","duration":"221.338992ms","start":"2024-08-14T16:12:09.794711Z","end":"2024-08-14T16:12:10.016050Z","steps":["trace[958517782] 'agreement among raft nodes before linearized reading'  (duration: 221.129195ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:12:10.016215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.967202ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-8988944d9-d5x8v\" ","response":"range_response_count:1 size:4561"}
	{"level":"info","ts":"2024-08-14T16:12:10.016317Z","caller":"traceutil/trace.go:171","msg":"trace[1282319182] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-8988944d9-d5x8v; range_end:; response_count:1; response_revision:1125; }","duration":"105.076127ms","start":"2024-08-14T16:12:09.911230Z","end":"2024-08-14T16:12:10.016306Z","steps":["trace[1282319182] 'agreement among raft nodes before linearized reading'  (duration: 104.745964ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:13:49.400333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.81349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:13:49.400470Z","caller":"traceutil/trace.go:171","msg":"trace[329974029] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1551; }","duration":"149.027895ms","start":"2024-08-14T16:13:49.251423Z","end":"2024-08-14T16:13:49.400451Z","steps":["trace[329974029] 'range keys from in-memory index tree'  (duration: 148.765329ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:13:49.400334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.247976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:13:49.400658Z","caller":"traceutil/trace.go:171","msg":"trace[910928357] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1551; }","duration":"362.690859ms","start":"2024-08-14T16:13:49.037960Z","end":"2024-08-14T16:13:49.400651Z","steps":["trace[910928357] 'count revisions from in-memory index tree'  (duration: 362.173482ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:13:49.400690Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T16:13:49.037909Z","time spent":"362.764802ms","remote":"127.0.0.1:46420","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":0,"response size":27,"request content":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true "}
	{"level":"info","ts":"2024-08-14T16:14:20.788670Z","caller":"traceutil/trace.go:171","msg":"trace[1784458280] transaction","detail":"{read_only:false; response_revision:1788; number_of_response:1; }","duration":"188.584509ms","start":"2024-08-14T16:14:20.600067Z","end":"2024-08-14T16:14:20.788651Z","steps":["trace[1784458280] 'process raft request'  (duration: 188.46064ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:14:20.789103Z","caller":"traceutil/trace.go:171","msg":"trace[143668891] linearizableReadLoop","detail":"{readStateIndex:1862; appliedIndex:1861; }","duration":"121.180081ms","start":"2024-08-14T16:14:20.667910Z","end":"2024-08-14T16:14:20.789090Z","steps":["trace[143668891] 'read index received'  (duration: 120.539726ms)","trace[143668891] 'applied index is now lower than readState.Index'  (duration: 639.397µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:14:20.789197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.272524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/statefulsets/kube-system/csi-hostpath-resizer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:14:20.789217Z","caller":"traceutil/trace.go:171","msg":"trace[1211386156] range","detail":"{range_begin:/registry/statefulsets/kube-system/csi-hostpath-resizer; range_end:; response_count:0; response_revision:1788; }","duration":"121.305069ms","start":"2024-08-14T16:14:20.667905Z","end":"2024-08-14T16:14:20.789210Z","steps":["trace[1211386156] 'agreement among raft nodes before linearized reading'  (duration: 121.249912ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:14:37.836386Z","caller":"traceutil/trace.go:171","msg":"trace[208046168] transaction","detail":"{read_only:false; response_revision:1881; number_of_response:1; }","duration":"120.759975ms","start":"2024-08-14T16:14:37.715605Z","end":"2024-08-14T16:14:37.836365Z","steps":["trace[208046168] 'process raft request'  (duration: 120.448799ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:15:11.940587Z","caller":"traceutil/trace.go:171","msg":"trace[1875794507] transaction","detail":"{read_only:false; response_revision:1978; number_of_response:1; }","duration":"106.178787ms","start":"2024-08-14T16:15:11.834394Z","end":"2024-08-14T16:15:11.940573Z","steps":["trace[1875794507] 'process raft request'  (duration: 106.045035ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:18:41 up 8 min,  0 users,  load average: 0.27, 0.79, 0.59
	Linux addons-521895 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [36daf0f60c2e926e79ca539ab6cb1a8f8339c60671b666f81cdba5eba289ba27] <==
	E0814 16:13:12.723503       1 conn.go:339] Error on socket receive: read tcp 192.168.39.170:8443->192.168.39.1:33544: use of closed network connection
	E0814 16:13:12.915570       1 conn.go:339] Error on socket receive: read tcp 192.168.39.170:8443->192.168.39.1:33570: use of closed network connection
	I0814 16:13:37.481555       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0814 16:13:38.527236       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0814 16:13:39.601821       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0814 16:13:39.786065       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.213.200"}
	I0814 16:13:57.570930       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0814 16:14:18.065898       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0814 16:14:21.449363       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.449421       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:14:21.480183       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.481396       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:14:21.502996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.503109       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:14:21.541863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.541992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0814 16:14:21.578885       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0814 16:14:21.578930       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0814 16:14:22.543009       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0814 16:14:22.579352       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0814 16:14:22.643825       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0814 16:14:33.769632       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.133.76"}
	E0814 16:14:55.092243       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.170:8443->10.244.0.32:40550: read: connection reset by peer
	I0814 16:16:00.170212       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.235.38"}
	E0814 16:16:02.286020       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [59a7d413ae30c45a10011ff7e6cb6787f7e23aa6e7baff938621ce36e22c8cf0] <==
	W0814 16:16:22.822483       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:16:22.822537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:16:24.964317       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:16:24.964463       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:03.815833       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:03.815989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:09.817621       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:09.817862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:14.540141       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:14.540365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:18.892108       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:18.892172       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:34.826152       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:34.826247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:51.790086       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:51.790206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:17:56.795449       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:17:56.795581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:18:04.295233       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:18:04.295319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:18:28.845371       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:18:28.845653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0814 16:18:37.929374       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0814 16:18:37.929495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0814 16:18:40.104505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="12.447µs"
	
	
	==> kube-proxy [230305fe29454b85326a4f4fad0d6cd292c63c50e294fca31428140c4ecfe30d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:10:52.651643       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 16:10:52.661913       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	E0814 16:10:52.661991       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:10:52.747088       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:10:52.747138       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:10:52.747168       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:10:52.749869       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:10:52.750191       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:10:52.750216       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:10:52.752042       1 config.go:197] "Starting service config controller"
	I0814 16:10:52.752081       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:10:52.752101       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:10:52.752106       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:10:52.752612       1 config.go:326] "Starting node config controller"
	I0814 16:10:52.752640       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:10:52.852251       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:10:52.852337       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:10:52.854608       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [808f6e1d6cb54eff3e40da317031b90b9e5ec59c65f63ee512b58a50896c43c1] <==
	W0814 16:10:42.832853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:42.832884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:42.832969       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 16:10:42.832999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:42.833099       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 16:10:42.833130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.695970       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:43.696022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.703334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:10:43.703374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.734819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 16:10:43.734940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.777951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:10:43.778091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:43.865104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 16:10:43.865207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:44.009179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:10:44.009426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:44.010004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 16:10:44.010167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:44.013458       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 16:10:44.013495       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:10:44.098686       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 16:10:44.098854       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0814 16:10:47.124245       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 16:17:45 addons-521895 kubelet[1224]: E0814 16:17:45.558875    1224 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 16:17:45 addons-521895 kubelet[1224]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 16:17:45 addons-521895 kubelet[1224]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 16:17:45 addons-521895 kubelet[1224]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 16:17:45 addons-521895 kubelet[1224]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 16:17:45 addons-521895 kubelet[1224]: E0814 16:17:45.969475    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652265969073068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:45 addons-521895 kubelet[1224]: E0814 16:17:45.969513    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652265969073068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:55 addons-521895 kubelet[1224]: E0814 16:17:55.972429    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652275971926920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:17:55 addons-521895 kubelet[1224]: E0814 16:17:55.972466    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652275971926920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:05 addons-521895 kubelet[1224]: E0814 16:18:05.977398    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652285976812069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:05 addons-521895 kubelet[1224]: E0814 16:18:05.977475    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652285976812069,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:06 addons-521895 kubelet[1224]: I0814 16:18:06.520687    1224 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 14 16:18:15 addons-521895 kubelet[1224]: E0814 16:18:15.980406    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652295979960439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:15 addons-521895 kubelet[1224]: E0814 16:18:15.980442    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652295979960439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:25 addons-521895 kubelet[1224]: E0814 16:18:25.983369    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652305982936174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:25 addons-521895 kubelet[1224]: E0814 16:18:25.983409    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652305982936174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:35 addons-521895 kubelet[1224]: E0814 16:18:35.986618    1224 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652315986086324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:35 addons-521895 kubelet[1224]: E0814 16:18:35.986937    1224 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723652315986086324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590422,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:18:40 addons-521895 kubelet[1224]: I0814 16:18:40.130603    1224 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-66swq" podStartSLOduration=158.840154454 podStartE2EDuration="2m41.130579058s" podCreationTimestamp="2024-08-14 16:15:59 +0000 UTC" firstStartedPulling="2024-08-14 16:16:00.554784988 +0000 UTC m=+315.167110251" lastFinishedPulling="2024-08-14 16:16:02.845209591 +0000 UTC m=+317.457534855" observedRunningTime="2024-08-14 16:16:03.605014459 +0000 UTC m=+318.217339743" watchObservedRunningTime="2024-08-14 16:18:40.130579058 +0000 UTC m=+474.742904333"
	Aug 14 16:18:41 addons-521895 kubelet[1224]: I0814 16:18:41.513776    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/efa28343-d15d-4a26-bc87-4c5c4e6cce30-tmp-dir\") pod \"efa28343-d15d-4a26-bc87-4c5c4e6cce30\" (UID: \"efa28343-d15d-4a26-bc87-4c5c4e6cce30\") "
	Aug 14 16:18:41 addons-521895 kubelet[1224]: I0814 16:18:41.513832    1224 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncdb8\" (UniqueName: \"kubernetes.io/projected/efa28343-d15d-4a26-bc87-4c5c4e6cce30-kube-api-access-ncdb8\") pod \"efa28343-d15d-4a26-bc87-4c5c4e6cce30\" (UID: \"efa28343-d15d-4a26-bc87-4c5c4e6cce30\") "
	Aug 14 16:18:41 addons-521895 kubelet[1224]: I0814 16:18:41.514198    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efa28343-d15d-4a26-bc87-4c5c4e6cce30-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "efa28343-d15d-4a26-bc87-4c5c4e6cce30" (UID: "efa28343-d15d-4a26-bc87-4c5c4e6cce30"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 14 16:18:41 addons-521895 kubelet[1224]: I0814 16:18:41.524876    1224 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efa28343-d15d-4a26-bc87-4c5c4e6cce30-kube-api-access-ncdb8" (OuterVolumeSpecName: "kube-api-access-ncdb8") pod "efa28343-d15d-4a26-bc87-4c5c4e6cce30" (UID: "efa28343-d15d-4a26-bc87-4c5c4e6cce30"). InnerVolumeSpecName "kube-api-access-ncdb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 14 16:18:41 addons-521895 kubelet[1224]: I0814 16:18:41.615056    1224 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ncdb8\" (UniqueName: \"kubernetes.io/projected/efa28343-d15d-4a26-bc87-4c5c4e6cce30-kube-api-access-ncdb8\") on node \"addons-521895\" DevicePath \"\""
	Aug 14 16:18:41 addons-521895 kubelet[1224]: I0814 16:18:41.615114    1224 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/efa28343-d15d-4a26-bc87-4c5c4e6cce30-tmp-dir\") on node \"addons-521895\" DevicePath \"\""
	
	
	==> storage-provisioner [8d99a130a829f1c499079f67a07aae6c5cd523392184575f72a947658691021a] <==
	I0814 16:10:56.980080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 16:10:57.074006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 16:10:57.074083       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 16:10:57.095629       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 16:10:57.095836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-521895_7d037236-7a9e-4cc2-b0a9-01f811657084!
	I0814 16:10:57.122404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0a62dbb-6b92-4b4d-ba20-4ea5c75c1d2d", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-521895_7d037236-7a9e-4cc2-b0a9-01f811657084 became leader
	I0814 16:10:57.196301       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-521895_7d037236-7a9e-4cc2-b0a9-01f811657084!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-521895 -n addons-521895
helpers_test.go:261: (dbg) Run:  kubectl --context addons-521895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-8988944d9-d5x8v
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-521895 describe pod metrics-server-8988944d9-d5x8v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-521895 describe pod metrics-server-8988944d9-d5x8v: exit status 1 (62.765867ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8988944d9-d5x8v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-521895 describe pod metrics-server-8988944d9-d5x8v: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (321.03s)
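The post-mortem steps above can be re-run by hand against the same cluster. A minimal sketch, assuming the addons-521895 profile and the out/minikube-linux-amd64 binary from this run are still available (pod names will differ in other runs):

	# Check apiserver status for the profile
	out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-521895 -n addons-521895
	# List pods that are not in the Running phase, across all namespaces
	kubectl --context addons-521895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Describe a reported pod; exits non-zero if it has already been deleted, as happened here
	kubectl --context addons-521895 describe pod metrics-server-8988944d9-d5x8v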

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.28s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-521895
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-521895: exit status 82 (2m0.476385845s)

                                                
                                                
-- stdout --
	* Stopping node "addons-521895"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-521895" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-521895
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-521895: exit status 11 (21.519090219s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-521895" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-521895
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-521895: exit status 11 (6.144344369s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-521895" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-521895
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-521895: exit status 11 (6.143803017s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.170:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-521895" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.28s)
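The failing sequence can be replayed manually to reproduce the timeout. A minimal sketch, assuming the same addons-521895 profile and binary path as in this run:

	# Stop the VM; in this run it timed out with GUEST_STOP_TIMEOUT (exit status 82)
	out/minikube-linux-amd64 stop -p addons-521895
	# The addon calls then fail with MK_ADDON_*_PAUSED because SSH to 192.168.39.170:22 is unreachable
	out/minikube-linux-amd64 addons enable dashboard -p addons-521895
	out/minikube-linux-amd64 addons disable dashboard -p addons-521895
	out/minikube-linux-amd64 addons disable gvisor -p addons-521895
	# Collect logs for a GitHub issue, as the error output suggests
	out/minikube-linux-amd64 logs --file=logs.txt -p addons-521895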

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 node stop m02 -v=7 --alsologtostderr
E0814 16:30:51.398522   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:32:13.320548   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.468961473s)

                                                
                                                
-- stdout --
	* Stopping node "ha-597780-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:30:33.354836   36080 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:30:33.355074   36080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:30:33.355083   36080 out.go:304] Setting ErrFile to fd 2...
	I0814 16:30:33.355087   36080 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:30:33.355267   36080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:30:33.355560   36080 mustload.go:65] Loading cluster: ha-597780
	I0814 16:30:33.355912   36080 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:30:33.355926   36080 stop.go:39] StopHost: ha-597780-m02
	I0814 16:30:33.356244   36080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:30:33.356288   36080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:30:33.371438   36080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0814 16:30:33.371877   36080 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:30:33.372449   36080 main.go:141] libmachine: Using API Version  1
	I0814 16:30:33.372478   36080 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:30:33.372805   36080 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:30:33.375634   36080 out.go:177] * Stopping node "ha-597780-m02"  ...
	I0814 16:30:33.377154   36080 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 16:30:33.377195   36080 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:30:33.377466   36080 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 16:30:33.377519   36080 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:30:33.380862   36080 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:30:33.381247   36080 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:30:33.381289   36080 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:30:33.381458   36080 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:30:33.381662   36080 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:30:33.381833   36080 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:30:33.382049   36080 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:30:33.468126   36080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 16:30:33.521922   36080 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 16:30:33.578454   36080 main.go:141] libmachine: Stopping "ha-597780-m02"...
	I0814 16:30:33.578484   36080 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:30:33.580328   36080 main.go:141] libmachine: (ha-597780-m02) Calling .Stop
	I0814 16:30:33.584499   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 0/120
	I0814 16:30:34.586632   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 1/120
	I0814 16:30:35.587885   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 2/120
	I0814 16:30:36.589900   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 3/120
	I0814 16:30:37.591087   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 4/120
	I0814 16:30:38.593359   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 5/120
	I0814 16:30:39.595198   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 6/120
	I0814 16:30:40.596570   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 7/120
	I0814 16:30:41.598001   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 8/120
	I0814 16:30:42.599436   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 9/120
	I0814 16:30:43.601578   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 10/120
	I0814 16:30:44.602903   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 11/120
	I0814 16:30:45.604286   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 12/120
	I0814 16:30:46.605736   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 13/120
	I0814 16:30:47.607118   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 14/120
	I0814 16:30:48.609140   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 15/120
	I0814 16:30:49.610560   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 16/120
	I0814 16:30:50.612177   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 17/120
	I0814 16:30:51.613631   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 18/120
	I0814 16:30:52.615223   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 19/120
	I0814 16:30:53.617319   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 20/120
	I0814 16:30:54.618588   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 21/120
	I0814 16:30:55.619978   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 22/120
	I0814 16:30:56.621165   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 23/120
	I0814 16:30:57.622491   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 24/120
	I0814 16:30:58.624392   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 25/120
	I0814 16:30:59.625719   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 26/120
	I0814 16:31:00.627131   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 27/120
	I0814 16:31:01.628451   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 28/120
	I0814 16:31:02.629986   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 29/120
	I0814 16:31:03.632333   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 30/120
	I0814 16:31:04.634525   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 31/120
	I0814 16:31:05.636230   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 32/120
	I0814 16:31:06.638072   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 33/120
	I0814 16:31:07.639314   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 34/120
	I0814 16:31:08.641367   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 35/120
	I0814 16:31:09.642667   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 36/120
	I0814 16:31:10.644011   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 37/120
	I0814 16:31:11.645722   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 38/120
	I0814 16:31:12.647808   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 39/120
	I0814 16:31:13.650073   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 40/120
	I0814 16:31:14.651465   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 41/120
	I0814 16:31:15.652665   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 42/120
	I0814 16:31:16.654098   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 43/120
	I0814 16:31:17.655399   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 44/120
	I0814 16:31:18.657346   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 45/120
	I0814 16:31:19.658832   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 46/120
	I0814 16:31:20.660719   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 47/120
	I0814 16:31:21.662211   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 48/120
	I0814 16:31:22.663732   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 49/120
	I0814 16:31:23.665849   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 50/120
	I0814 16:31:24.668042   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 51/120
	I0814 16:31:25.670152   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 52/120
	I0814 16:31:26.671661   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 53/120
	I0814 16:31:27.673888   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 54/120
	I0814 16:31:28.676219   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 55/120
	I0814 16:31:29.678102   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 56/120
	I0814 16:31:30.679561   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 57/120
	I0814 16:31:31.681642   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 58/120
	I0814 16:31:32.683674   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 59/120
	I0814 16:31:33.685672   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 60/120
	I0814 16:31:34.686961   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 61/120
	I0814 16:31:35.689183   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 62/120
	I0814 16:31:36.690518   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 63/120
	I0814 16:31:37.691898   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 64/120
	I0814 16:31:38.693765   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 65/120
	I0814 16:31:39.696065   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 66/120
	I0814 16:31:40.697694   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 67/120
	I0814 16:31:41.699120   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 68/120
	I0814 16:31:42.700500   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 69/120
	I0814 16:31:43.703043   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 70/120
	I0814 16:31:44.704374   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 71/120
	I0814 16:31:45.705848   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 72/120
	I0814 16:31:46.707228   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 73/120
	I0814 16:31:47.708596   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 74/120
	I0814 16:31:48.710680   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 75/120
	I0814 16:31:49.712996   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 76/120
	I0814 16:31:50.714905   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 77/120
	I0814 16:31:51.716357   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 78/120
	I0814 16:31:52.717964   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 79/120
	I0814 16:31:53.720057   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 80/120
	I0814 16:31:54.722191   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 81/120
	I0814 16:31:55.723663   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 82/120
	I0814 16:31:56.725703   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 83/120
	I0814 16:31:57.727069   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 84/120
	I0814 16:31:58.729096   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 85/120
	I0814 16:31:59.730537   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 86/120
	I0814 16:32:00.731822   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 87/120
	I0814 16:32:01.733279   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 88/120
	I0814 16:32:02.734708   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 89/120
	I0814 16:32:03.736908   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 90/120
	I0814 16:32:04.738307   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 91/120
	I0814 16:32:05.739584   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 92/120
	I0814 16:32:06.740918   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 93/120
	I0814 16:32:07.742650   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 94/120
	I0814 16:32:08.744300   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 95/120
	I0814 16:32:09.745774   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 96/120
	I0814 16:32:10.747030   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 97/120
	I0814 16:32:11.748462   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 98/120
	I0814 16:32:12.749667   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 99/120
	I0814 16:32:13.751819   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 100/120
	I0814 16:32:14.753862   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 101/120
	I0814 16:32:15.755095   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 102/120
	I0814 16:32:16.756362   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 103/120
	I0814 16:32:17.757922   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 104/120
	I0814 16:32:18.759540   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 105/120
	I0814 16:32:19.760964   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 106/120
	I0814 16:32:20.762413   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 107/120
	I0814 16:32:21.763888   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 108/120
	I0814 16:32:22.765854   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 109/120
	I0814 16:32:23.767827   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 110/120
	I0814 16:32:24.769243   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 111/120
	I0814 16:32:25.770709   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 112/120
	I0814 16:32:26.772146   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 113/120
	I0814 16:32:27.773865   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 114/120
	I0814 16:32:28.775583   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 115/120
	I0814 16:32:29.777855   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 116/120
	I0814 16:32:30.779358   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 117/120
	I0814 16:32:31.780972   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 118/120
	I0814 16:32:32.782644   36080 main.go:141] libmachine: (ha-597780-m02) Waiting for machine to stop 119/120
	I0814 16:32:33.783286   36080 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 16:32:33.783447   36080 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-597780 node stop m02 -v=7 --alsologtostderr": exit status 30
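For readers unfamiliar with the stop flow: the 120 "Waiting for machine to stop N/120" lines above come from a one-second poll that gives up after two minutes when the guest never reports a stopped state, which is exactly the `unable to stop vm, current state "Running"` failure that follows. The Go sketch below only illustrates that retry pattern under assumed names (stopAndWait, the callback signatures, and the demo values are invented for this example); it is not minikube's or libmachine's actual implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopAndWait issues a stop request, then polls the driver-reported state
// once per second, mirroring the "Waiting for machine to stop N/120" lines
// in the log above. It fails if the machine never leaves the Running state.
func stopAndWait(stop func() error, state func() (string, error), maxAttempts int) error {
	if err := stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxAttempts; i++ {
		s, err := state()
		if err != nil {
			return fmt.Errorf("get state: %w", err)
		}
		if s == "Stopped" {
			return nil // guest powered off within the allotted window
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate the failure mode from this report: the guest never powers off,
	// so every poll still sees "Running". Three attempts keep the demo short;
	// the run above used 120.
	err := stopAndWait(
		func() error { return nil },
		func() (string, error) { return "Running", nil },
		3,
	)
	fmt.Println("stop err:", err)
}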
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 3 (19.191019845s)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:32:33.826643   36493 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:32:33.826744   36493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:32:33.826749   36493 out.go:304] Setting ErrFile to fd 2...
	I0814 16:32:33.826753   36493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:32:33.826905   36493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:32:33.827068   36493 out.go:298] Setting JSON to false
	I0814 16:32:33.827095   36493 mustload.go:65] Loading cluster: ha-597780
	I0814 16:32:33.827127   36493 notify.go:220] Checking for updates...
	I0814 16:32:33.827526   36493 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:32:33.827552   36493 status.go:255] checking status of ha-597780 ...
	I0814 16:32:33.827997   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:33.828074   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:33.845876   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46833
	I0814 16:32:33.846260   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:33.846854   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:33.846885   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:33.847196   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:33.847400   36493 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:32:33.848936   36493 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:32:33.848953   36493 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:32:33.849292   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:33.849350   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:33.865329   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0814 16:32:33.865728   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:33.866168   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:33.866189   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:33.866524   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:33.866689   36493 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:32:33.869426   36493 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:32:33.869910   36493 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:32:33.869934   36493 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:32:33.870051   36493 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:32:33.870327   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:33.870363   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:33.887105   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
	I0814 16:32:33.887508   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:33.887923   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:33.887944   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:33.888203   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:33.888393   36493 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:32:33.888592   36493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:32:33.888611   36493 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:32:33.891714   36493 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:32:33.892129   36493 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:32:33.892161   36493 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:32:33.892304   36493 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:32:33.892485   36493 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:32:33.892636   36493 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:32:33.892776   36493 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:32:33.978227   36493 ssh_runner.go:195] Run: systemctl --version
	I0814 16:32:33.985222   36493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:32:34.000735   36493 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:32:34.000765   36493 api_server.go:166] Checking apiserver status ...
	I0814 16:32:34.000804   36493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:32:34.016680   36493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:32:34.026278   36493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:32:34.026334   36493 ssh_runner.go:195] Run: ls
	I0814 16:32:34.031018   36493 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:32:34.035271   36493 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:32:34.035296   36493 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:32:34.035306   36493 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:32:34.035321   36493 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:32:34.035609   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:34.035646   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:34.050386   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I0814 16:32:34.050811   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:34.051253   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:34.051279   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:34.051609   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:34.051799   36493 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:32:34.053314   36493 status.go:330] ha-597780-m02 host status = "Running" (err=<nil>)
	I0814 16:32:34.053338   36493 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:32:34.053631   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:34.053666   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:34.067993   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46289
	I0814 16:32:34.068352   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:34.068792   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:34.068821   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:34.069157   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:34.069355   36493 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:32:34.072044   36493 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:32:34.072450   36493 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:32:34.072477   36493 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:32:34.072640   36493 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:32:34.072927   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:34.072961   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:34.087826   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0814 16:32:34.088269   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:34.088765   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:34.088784   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:34.089081   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:34.089273   36493 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:32:34.089461   36493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:32:34.089482   36493 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:32:34.092100   36493 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:32:34.092483   36493 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:32:34.092523   36493 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:32:34.092643   36493 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:32:34.092938   36493 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:32:34.093129   36493 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:32:34.093260   36493 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	W0814 16:32:52.603517   36493 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:32:52.603634   36493 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0814 16:32:52.603654   36493 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:32:52.603665   36493 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 16:32:52.603698   36493 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:32:52.603710   36493 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:32:52.604025   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:52.604076   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:52.620009   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0814 16:32:52.620422   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:52.620866   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:52.620886   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:52.621148   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:52.621325   36493 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:32:52.622951   36493 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:32:52.622967   36493 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:32:52.623248   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:52.623277   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:52.637945   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45927
	I0814 16:32:52.638317   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:52.638762   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:52.638786   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:52.639129   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:52.639353   36493 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:32:52.642009   36493 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:32:52.642389   36493 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:32:52.642428   36493 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:32:52.642499   36493 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:32:52.642898   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:52.642941   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:52.657593   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43159
	I0814 16:32:52.658090   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:52.658575   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:52.658597   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:52.658877   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:52.659027   36493 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:32:52.659221   36493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:32:52.659243   36493 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:32:52.662124   36493 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:32:52.662559   36493 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:32:52.662586   36493 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:32:52.662747   36493 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:32:52.662928   36493 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:32:52.663100   36493 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:32:52.663258   36493 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:32:52.751522   36493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:32:52.773801   36493 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:32:52.773832   36493 api_server.go:166] Checking apiserver status ...
	I0814 16:32:52.773871   36493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:32:52.789858   36493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:32:52.798969   36493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:32:52.799030   36493 ssh_runner.go:195] Run: ls
	I0814 16:32:52.803245   36493 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:32:52.809574   36493 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:32:52.809592   36493 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:32:52.809601   36493 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:32:52.809614   36493 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:32:52.809920   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:52.809958   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:52.824877   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0814 16:32:52.825265   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:52.825736   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:52.825756   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:52.826037   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:52.826249   36493 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:32:52.827881   36493 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:32:52.827900   36493 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:32:52.828304   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:52.828375   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:52.842904   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35651
	I0814 16:32:52.843344   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:52.843792   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:52.843813   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:52.844093   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:52.844273   36493 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:32:52.847176   36493 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:32:52.847612   36493 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:32:52.847656   36493 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:32:52.847813   36493 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:32:52.848096   36493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:52.848145   36493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:52.862831   36493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I0814 16:32:52.863281   36493 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:52.863742   36493 main.go:141] libmachine: Using API Version  1
	I0814 16:32:52.863762   36493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:52.864043   36493 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:52.864232   36493 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:32:52.864433   36493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:32:52.864458   36493 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:32:52.867636   36493 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:32:52.868048   36493 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:32:52.868077   36493 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:32:52.868241   36493 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:32:52.868447   36493 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:32:52.868626   36493 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:32:52.868784   36493 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:32:52.956401   36493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:32:52.973637   36493 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr" : exit status 3
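The status output above shows how a single unreachable SSH port degrades the whole report: the dial to 192.168.39.225:22 fails with "no route to host", so m02 is listed as Host:Error with a Nonexistent kubelet and apiserver while the other nodes stay Running. The snippet below is a minimal sketch of that mapping, assuming a plain TCP reachability check; the type and function names are invented for illustration and are not minikube's status code.

package main

import (
	"fmt"
	"net"
	"time"
)

// nodeStatus is a simplified stand-in for the per-node fields shown above.
type nodeStatus struct {
	Host, Kubelet, APIServer string
}

// probeSSH treats an unreachable SSH port as a host error, which matches
// the ha-597780-m02 entry in the status output above.
func probeSSH(addr string) nodeStatus {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// e.g. "dial tcp 192.168.39.225:22: connect: no route to host"
		return nodeStatus{Host: "Error", Kubelet: "Nonexistent", APIServer: "Nonexistent"}
	}
	conn.Close()
	return nodeStatus{Host: "Running", Kubelet: "Running", APIServer: "Running"}
}

func main() {
	fmt.Printf("%+v\n", probeSSH("192.168.39.225:22"))
}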
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-597780 -n ha-597780
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-597780 logs -n 25: (1.318457081s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780:/home/docker/cp-test_ha-597780-m03_ha-597780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780 sudo cat                                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m02:/home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m02 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04:/home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m04 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp testdata/cp-test.txt                                                | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780:/home/docker/cp-test_ha-597780-m04_ha-597780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780 sudo cat                                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m02:/home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m02 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03:/home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m03 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-597780 node stop m02 -v=7                                                     | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:25:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:25:16.550739   31878 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:25:16.550860   31878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:25:16.550870   31878 out.go:304] Setting ErrFile to fd 2...
	I0814 16:25:16.550875   31878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:25:16.551070   31878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:25:16.551704   31878 out.go:298] Setting JSON to false
	I0814 16:25:16.552522   31878 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4061,"bootTime":1723648656,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:25:16.552611   31878 start.go:139] virtualization: kvm guest
	I0814 16:25:16.554763   31878 out.go:177] * [ha-597780] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:25:16.556019   31878 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:25:16.556020   31878 notify.go:220] Checking for updates...
	I0814 16:25:16.558421   31878 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:25:16.559520   31878 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:25:16.560635   31878 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:25:16.561797   31878 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:25:16.562971   31878 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:25:16.564285   31878 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:25:16.597932   31878 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 16:25:16.599009   31878 start.go:297] selected driver: kvm2
	I0814 16:25:16.599021   31878 start.go:901] validating driver "kvm2" against <nil>
	I0814 16:25:16.599032   31878 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:25:16.600027   31878 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:25:16.600112   31878 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 16:25:16.614699   31878 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 16:25:16.614764   31878 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:25:16.614967   31878 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:25:16.615009   31878 cni.go:84] Creating CNI manager for ""
	I0814 16:25:16.615018   31878 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0814 16:25:16.615023   31878 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 16:25:16.615081   31878 start.go:340] cluster config:
	{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:25:16.615167   31878 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:25:16.616850   31878 out.go:177] * Starting "ha-597780" primary control-plane node in "ha-597780" cluster
	I0814 16:25:16.617911   31878 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:25:16.617944   31878 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:25:16.617957   31878 cache.go:56] Caching tarball of preloaded images
	I0814 16:25:16.618047   31878 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:25:16.618061   31878 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:25:16.618394   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:25:16.618416   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json: {Name:mk4378090493a3a71e7f59c8a9d85581c5cdd67d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:16.618556   31878 start.go:360] acquireMachinesLock for ha-597780: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:25:16.618595   31878 start.go:364] duration metric: took 23.753µs to acquireMachinesLock for "ha-597780"
	I0814 16:25:16.618618   31878 start.go:93] Provisioning new machine with config: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:25:16.618699   31878 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 16:25:16.620236   31878 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 16:25:16.620378   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:25:16.620425   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:25:16.634691   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0814 16:25:16.635145   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:25:16.635712   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:25:16.635731   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:25:16.636011   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:25:16.636184   31878 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:25:16.636290   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:16.636436   31878 start.go:159] libmachine.API.Create for "ha-597780" (driver="kvm2")
	I0814 16:25:16.636472   31878 client.go:168] LocalClient.Create starting
	I0814 16:25:16.636507   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 16:25:16.636542   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:25:16.636559   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:25:16.636624   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 16:25:16.636658   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:25:16.636679   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:25:16.636704   31878 main.go:141] libmachine: Running pre-create checks...
	I0814 16:25:16.636716   31878 main.go:141] libmachine: (ha-597780) Calling .PreCreateCheck
	I0814 16:25:16.637110   31878 main.go:141] libmachine: (ha-597780) Calling .GetConfigRaw
	I0814 16:25:16.637452   31878 main.go:141] libmachine: Creating machine...
	I0814 16:25:16.637464   31878 main.go:141] libmachine: (ha-597780) Calling .Create
	I0814 16:25:16.637570   31878 main.go:141] libmachine: (ha-597780) Creating KVM machine...
	I0814 16:25:16.638908   31878 main.go:141] libmachine: (ha-597780) DBG | found existing default KVM network
	I0814 16:25:16.639577   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:16.639463   31901 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0814 16:25:16.639613   31878 main.go:141] libmachine: (ha-597780) DBG | created network xml: 
	I0814 16:25:16.639637   31878 main.go:141] libmachine: (ha-597780) DBG | <network>
	I0814 16:25:16.639650   31878 main.go:141] libmachine: (ha-597780) DBG |   <name>mk-ha-597780</name>
	I0814 16:25:16.639684   31878 main.go:141] libmachine: (ha-597780) DBG |   <dns enable='no'/>
	I0814 16:25:16.639698   31878 main.go:141] libmachine: (ha-597780) DBG |   
	I0814 16:25:16.639711   31878 main.go:141] libmachine: (ha-597780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0814 16:25:16.639718   31878 main.go:141] libmachine: (ha-597780) DBG |     <dhcp>
	I0814 16:25:16.639727   31878 main.go:141] libmachine: (ha-597780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0814 16:25:16.639737   31878 main.go:141] libmachine: (ha-597780) DBG |     </dhcp>
	I0814 16:25:16.639750   31878 main.go:141] libmachine: (ha-597780) DBG |   </ip>
	I0814 16:25:16.639759   31878 main.go:141] libmachine: (ha-597780) DBG |   
	I0814 16:25:16.639764   31878 main.go:141] libmachine: (ha-597780) DBG | </network>
	I0814 16:25:16.639776   31878 main.go:141] libmachine: (ha-597780) DBG | 
	I0814 16:25:16.644808   31878 main.go:141] libmachine: (ha-597780) DBG | trying to create private KVM network mk-ha-597780 192.168.39.0/24...
	I0814 16:25:16.708926   31878 main.go:141] libmachine: (ha-597780) DBG | private KVM network mk-ha-597780 192.168.39.0/24 created
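For reference, the private network created here corresponds to the <network> XML printed just above. A rough hand-run equivalent with virsh would look like the sketch below; the XML file name is hypothetical and would hold that logged definition:

	virsh --connect qemu:///system net-define mk-ha-597780.xml   # file containing the <network> XML logged above
	virsh --connect qemu:///system net-start mk-ha-597780
	virsh --connect qemu:///system net-dumpxml mk-ha-597780      # confirm the 192.168.39.0/24 range and DHCP settings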
	I0814 16:25:16.708967   31878 main.go:141] libmachine: (ha-597780) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780 ...
	I0814 16:25:16.708983   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:16.708894   31901 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:25:16.709002   31878 main.go:141] libmachine: (ha-597780) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:25:16.709027   31878 main.go:141] libmachine: (ha-597780) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 16:25:16.949606   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:16.949479   31901 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa...
	I0814 16:25:17.134823   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:17.134697   31901 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/ha-597780.rawdisk...
	I0814 16:25:17.134847   31878 main.go:141] libmachine: (ha-597780) DBG | Writing magic tar header
	I0814 16:25:17.134861   31878 main.go:141] libmachine: (ha-597780) DBG | Writing SSH key tar header
	I0814 16:25:17.134872   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:17.134813   31901 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780 ...
	I0814 16:25:17.134887   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780
	I0814 16:25:17.134925   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780 (perms=drwx------)
	I0814 16:25:17.134959   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 16:25:17.134977   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 16:25:17.134989   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 16:25:17.135014   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:25:17.135027   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 16:25:17.135040   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 16:25:17.135056   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 16:25:17.135068   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins
	I0814 16:25:17.135081   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home
	I0814 16:25:17.135093   31878 main.go:141] libmachine: (ha-597780) DBG | Skipping /home - not owner
	I0814 16:25:17.135111   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 16:25:17.135123   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 16:25:17.135135   31878 main.go:141] libmachine: (ha-597780) Creating domain...
	I0814 16:25:17.136109   31878 main.go:141] libmachine: (ha-597780) define libvirt domain using xml: 
	I0814 16:25:17.136137   31878 main.go:141] libmachine: (ha-597780) <domain type='kvm'>
	I0814 16:25:17.136149   31878 main.go:141] libmachine: (ha-597780)   <name>ha-597780</name>
	I0814 16:25:17.136163   31878 main.go:141] libmachine: (ha-597780)   <memory unit='MiB'>2200</memory>
	I0814 16:25:17.136174   31878 main.go:141] libmachine: (ha-597780)   <vcpu>2</vcpu>
	I0814 16:25:17.136196   31878 main.go:141] libmachine: (ha-597780)   <features>
	I0814 16:25:17.136205   31878 main.go:141] libmachine: (ha-597780)     <acpi/>
	I0814 16:25:17.136214   31878 main.go:141] libmachine: (ha-597780)     <apic/>
	I0814 16:25:17.136226   31878 main.go:141] libmachine: (ha-597780)     <pae/>
	I0814 16:25:17.136236   31878 main.go:141] libmachine: (ha-597780)     
	I0814 16:25:17.136247   31878 main.go:141] libmachine: (ha-597780)   </features>
	I0814 16:25:17.136256   31878 main.go:141] libmachine: (ha-597780)   <cpu mode='host-passthrough'>
	I0814 16:25:17.136268   31878 main.go:141] libmachine: (ha-597780)   
	I0814 16:25:17.136277   31878 main.go:141] libmachine: (ha-597780)   </cpu>
	I0814 16:25:17.136286   31878 main.go:141] libmachine: (ha-597780)   <os>
	I0814 16:25:17.136296   31878 main.go:141] libmachine: (ha-597780)     <type>hvm</type>
	I0814 16:25:17.136308   31878 main.go:141] libmachine: (ha-597780)     <boot dev='cdrom'/>
	I0814 16:25:17.136322   31878 main.go:141] libmachine: (ha-597780)     <boot dev='hd'/>
	I0814 16:25:17.136334   31878 main.go:141] libmachine: (ha-597780)     <bootmenu enable='no'/>
	I0814 16:25:17.136342   31878 main.go:141] libmachine: (ha-597780)   </os>
	I0814 16:25:17.136351   31878 main.go:141] libmachine: (ha-597780)   <devices>
	I0814 16:25:17.136361   31878 main.go:141] libmachine: (ha-597780)     <disk type='file' device='cdrom'>
	I0814 16:25:17.136376   31878 main.go:141] libmachine: (ha-597780)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/boot2docker.iso'/>
	I0814 16:25:17.136388   31878 main.go:141] libmachine: (ha-597780)       <target dev='hdc' bus='scsi'/>
	I0814 16:25:17.136401   31878 main.go:141] libmachine: (ha-597780)       <readonly/>
	I0814 16:25:17.136411   31878 main.go:141] libmachine: (ha-597780)     </disk>
	I0814 16:25:17.136422   31878 main.go:141] libmachine: (ha-597780)     <disk type='file' device='disk'>
	I0814 16:25:17.136435   31878 main.go:141] libmachine: (ha-597780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 16:25:17.136449   31878 main.go:141] libmachine: (ha-597780)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/ha-597780.rawdisk'/>
	I0814 16:25:17.136461   31878 main.go:141] libmachine: (ha-597780)       <target dev='hda' bus='virtio'/>
	I0814 16:25:17.136475   31878 main.go:141] libmachine: (ha-597780)     </disk>
	I0814 16:25:17.136487   31878 main.go:141] libmachine: (ha-597780)     <interface type='network'>
	I0814 16:25:17.136499   31878 main.go:141] libmachine: (ha-597780)       <source network='mk-ha-597780'/>
	I0814 16:25:17.136514   31878 main.go:141] libmachine: (ha-597780)       <model type='virtio'/>
	I0814 16:25:17.136524   31878 main.go:141] libmachine: (ha-597780)     </interface>
	I0814 16:25:17.136532   31878 main.go:141] libmachine: (ha-597780)     <interface type='network'>
	I0814 16:25:17.136546   31878 main.go:141] libmachine: (ha-597780)       <source network='default'/>
	I0814 16:25:17.136558   31878 main.go:141] libmachine: (ha-597780)       <model type='virtio'/>
	I0814 16:25:17.136567   31878 main.go:141] libmachine: (ha-597780)     </interface>
	I0814 16:25:17.136578   31878 main.go:141] libmachine: (ha-597780)     <serial type='pty'>
	I0814 16:25:17.136589   31878 main.go:141] libmachine: (ha-597780)       <target port='0'/>
	I0814 16:25:17.136606   31878 main.go:141] libmachine: (ha-597780)     </serial>
	I0814 16:25:17.136621   31878 main.go:141] libmachine: (ha-597780)     <console type='pty'>
	I0814 16:25:17.136638   31878 main.go:141] libmachine: (ha-597780)       <target type='serial' port='0'/>
	I0814 16:25:17.136657   31878 main.go:141] libmachine: (ha-597780)     </console>
	I0814 16:25:17.136668   31878 main.go:141] libmachine: (ha-597780)     <rng model='virtio'>
	I0814 16:25:17.136679   31878 main.go:141] libmachine: (ha-597780)       <backend model='random'>/dev/random</backend>
	I0814 16:25:17.136689   31878 main.go:141] libmachine: (ha-597780)     </rng>
	I0814 16:25:17.136698   31878 main.go:141] libmachine: (ha-597780)     
	I0814 16:25:17.136734   31878 main.go:141] libmachine: (ha-597780)     
	I0814 16:25:17.136752   31878 main.go:141] libmachine: (ha-597780)   </devices>
	I0814 16:25:17.136815   31878 main.go:141] libmachine: (ha-597780) </domain>
	I0814 16:25:17.136844   31878 main.go:141] libmachine: (ha-597780) 
	I0814 16:25:17.140743   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:f8:cc:9d in network default
	I0814 16:25:17.141203   31878 main.go:141] libmachine: (ha-597780) Ensuring networks are active...
	I0814 16:25:17.141220   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:17.141830   31878 main.go:141] libmachine: (ha-597780) Ensuring network default is active
	I0814 16:25:17.142106   31878 main.go:141] libmachine: (ha-597780) Ensuring network mk-ha-597780 is active
	I0814 16:25:17.142507   31878 main.go:141] libmachine: (ha-597780) Getting domain xml...
	I0814 16:25:17.143143   31878 main.go:141] libmachine: (ha-597780) Creating domain...
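The domain is defined from the <domain> XML logged above and then started. Done by hand, roughly the same steps with virsh would be (XML file name hypothetical):

	virsh --connect qemu:///system define ha-597780.xml      # file containing the <domain> XML logged above
	virsh --connect qemu:///system start ha-597780
	virsh --connect qemu:///system domifaddr ha-597780       # poll for the DHCP lease, as the retry loop below does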
	I0814 16:25:18.312528   31878 main.go:141] libmachine: (ha-597780) Waiting to get IP...
	I0814 16:25:18.313190   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:18.313568   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:18.313613   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:18.313562   31901 retry.go:31] will retry after 254.454148ms: waiting for machine to come up
	I0814 16:25:18.570182   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:18.570714   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:18.570755   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:18.570681   31901 retry.go:31] will retry after 324.643085ms: waiting for machine to come up
	I0814 16:25:18.897083   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:18.897461   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:18.897486   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:18.897420   31901 retry.go:31] will retry after 300.449231ms: waiting for machine to come up
	I0814 16:25:19.199898   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:19.200358   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:19.200384   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:19.200310   31901 retry.go:31] will retry after 550.899386ms: waiting for machine to come up
	I0814 16:25:19.752907   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:19.753360   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:19.753387   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:19.753308   31901 retry.go:31] will retry after 582.73846ms: waiting for machine to come up
	I0814 16:25:20.338033   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:20.338395   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:20.338423   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:20.338359   31901 retry.go:31] will retry after 661.209453ms: waiting for machine to come up
	I0814 16:25:21.000973   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:21.001278   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:21.001354   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:21.001255   31901 retry.go:31] will retry after 1.081333112s: waiting for machine to come up
	I0814 16:25:22.084264   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:22.084621   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:22.084680   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:22.084600   31901 retry.go:31] will retry after 1.016377445s: waiting for machine to come up
	I0814 16:25:23.102804   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:23.103343   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:23.103394   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:23.103275   31901 retry.go:31] will retry after 1.402260728s: waiting for machine to come up
	I0814 16:25:24.507776   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:24.508213   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:24.508236   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:24.508172   31901 retry.go:31] will retry after 2.141132665s: waiting for machine to come up
	I0814 16:25:26.650375   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:26.650778   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:26.650805   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:26.650735   31901 retry.go:31] will retry after 2.200155129s: waiting for machine to come up
	I0814 16:25:28.854009   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:28.854327   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:28.854352   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:28.854291   31901 retry.go:31] will retry after 3.179850613s: waiting for machine to come up
	I0814 16:25:32.035100   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:32.035560   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:32.035583   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:32.035512   31901 retry.go:31] will retry after 4.298197863s: waiting for machine to come up
	I0814 16:25:36.338930   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:36.339412   31878 main.go:141] libmachine: (ha-597780) Found IP for machine: 192.168.39.4
	I0814 16:25:36.339429   31878 main.go:141] libmachine: (ha-597780) Reserving static IP address...
	I0814 16:25:36.339441   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has current primary IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:36.339906   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find host DHCP lease matching {name: "ha-597780", mac: "52:54:00:d7:0e:d3", ip: "192.168.39.4"} in network mk-ha-597780
	I0814 16:25:36.412805   31878 main.go:141] libmachine: (ha-597780) DBG | Getting to WaitForSSH function...
	I0814 16:25:36.412831   31878 main.go:141] libmachine: (ha-597780) Reserved static IP address: 192.168.39.4
	I0814 16:25:36.412854   31878 main.go:141] libmachine: (ha-597780) Waiting for SSH to be available...
	I0814 16:25:36.415141   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:36.415495   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780
	I0814 16:25:36.415536   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find defined IP address of network mk-ha-597780 interface with MAC address 52:54:00:d7:0e:d3
	I0814 16:25:36.415684   31878 main.go:141] libmachine: (ha-597780) DBG | Using SSH client type: external
	I0814 16:25:36.415703   31878 main.go:141] libmachine: (ha-597780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa (-rw-------)
	I0814 16:25:36.415739   31878 main.go:141] libmachine: (ha-597780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:25:36.415757   31878 main.go:141] libmachine: (ha-597780) DBG | About to run SSH command:
	I0814 16:25:36.415770   31878 main.go:141] libmachine: (ha-597780) DBG | exit 0
	I0814 16:25:36.419416   31878 main.go:141] libmachine: (ha-597780) DBG | SSH cmd err, output: exit status 255: 
	I0814 16:25:36.419439   31878 main.go:141] libmachine: (ha-597780) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0814 16:25:36.419448   31878 main.go:141] libmachine: (ha-597780) DBG | command : exit 0
	I0814 16:25:36.419460   31878 main.go:141] libmachine: (ha-597780) DBG | err     : exit status 255
	I0814 16:25:36.419473   31878 main.go:141] libmachine: (ha-597780) DBG | output  : 
	I0814 16:25:39.421510   31878 main.go:141] libmachine: (ha-597780) DBG | Getting to WaitForSSH function...
	I0814 16:25:39.424078   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.424451   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.424521   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.424593   31878 main.go:141] libmachine: (ha-597780) DBG | Using SSH client type: external
	I0814 16:25:39.424644   31878 main.go:141] libmachine: (ha-597780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa (-rw-------)
	I0814 16:25:39.424673   31878 main.go:141] libmachine: (ha-597780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:25:39.424689   31878 main.go:141] libmachine: (ha-597780) DBG | About to run SSH command:
	I0814 16:25:39.424703   31878 main.go:141] libmachine: (ha-597780) DBG | exit 0
	I0814 16:25:39.547152   31878 main.go:141] libmachine: (ha-597780) DBG | SSH cmd err, output: <nil>: 
	I0814 16:25:39.547435   31878 main.go:141] libmachine: (ha-597780) KVM machine creation complete!
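The WaitForSSH step above simply runs "exit 0" over SSH with the external client and options captured in the log. An equivalent manual check against the new machine, reusing the key path and address from the log, would be roughly:

	ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa \
	    docker@192.168.39.4 'exit 0' && echo "SSH is up"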
	I0814 16:25:39.547760   31878 main.go:141] libmachine: (ha-597780) Calling .GetConfigRaw
	I0814 16:25:39.548271   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:39.548518   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:39.548681   31878 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 16:25:39.548694   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:25:39.550030   31878 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 16:25:39.550053   31878 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 16:25:39.550061   31878 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 16:25:39.550068   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.552399   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.552722   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.552745   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.552887   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:39.553063   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.553209   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.553355   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:39.553488   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:39.553719   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:39.553731   31878 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 16:25:39.650454   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:25:39.650478   31878 main.go:141] libmachine: Detecting the provisioner...
	I0814 16:25:39.650488   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.653338   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.653756   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.653785   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.653914   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:39.654119   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.654246   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.654367   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:39.654518   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:39.654731   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:39.654754   31878 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 16:25:39.751867   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 16:25:39.751933   31878 main.go:141] libmachine: found compatible host: buildroot
	I0814 16:25:39.751942   31878 main.go:141] libmachine: Provisioning with buildroot...
	I0814 16:25:39.751949   31878 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:25:39.752189   31878 buildroot.go:166] provisioning hostname "ha-597780"
	I0814 16:25:39.752214   31878 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:25:39.752398   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.754819   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.755136   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.755162   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.755272   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:39.755528   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.755776   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.755908   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:39.756047   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:39.756223   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:39.756237   31878 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-597780 && echo "ha-597780" | sudo tee /etc/hostname
	I0814 16:25:39.868750   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780
	
	I0814 16:25:39.868781   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.871293   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.871681   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.871707   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.871899   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:39.872112   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.872295   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.872448   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:39.872690   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:39.872938   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:39.872959   31878 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-597780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-597780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-597780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:25:39.980882   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:25:39.980908   31878 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:25:39.980935   31878 buildroot.go:174] setting up certificates
	I0814 16:25:39.980951   31878 provision.go:84] configureAuth start
	I0814 16:25:39.980962   31878 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:25:39.981243   31878 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:25:39.983763   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.984094   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.984115   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.984260   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.986386   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.986692   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.986723   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.986857   31878 provision.go:143] copyHostCerts
	I0814 16:25:39.986891   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:25:39.986925   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 16:25:39.986938   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:25:39.987025   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:25:39.987135   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:25:39.987160   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 16:25:39.987169   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:25:39.987209   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:25:39.987284   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:25:39.987337   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 16:25:39.987348   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:25:39.987385   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:25:39.987460   31878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.ha-597780 san=[127.0.0.1 192.168.39.4 ha-597780 localhost minikube]
	I0814 16:25:40.130425   31878 provision.go:177] copyRemoteCerts
	I0814 16:25:40.130484   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:25:40.130507   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.133344   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.133638   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.133661   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.133827   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.134056   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.134235   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.134395   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:25:40.217025   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 16:25:40.217092   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:25:40.239452   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 16:25:40.239515   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0814 16:25:40.260864   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 16:25:40.260926   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 16:25:40.282296   31878 provision.go:87] duration metric: took 301.331388ms to configureAuth
	I0814 16:25:40.282331   31878 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:25:40.282512   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:25:40.282579   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.285182   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.285501   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.285528   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.285735   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.285955   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.286114   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.286213   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.286372   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:40.286536   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:40.286552   31878 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:25:40.532377   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:25:40.532414   31878 main.go:141] libmachine: Checking connection to Docker...
	I0814 16:25:40.532424   31878 main.go:141] libmachine: (ha-597780) Calling .GetURL
	I0814 16:25:40.533632   31878 main.go:141] libmachine: (ha-597780) DBG | Using libvirt version 6000000
	I0814 16:25:40.535761   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.536096   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.536125   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.536281   31878 main.go:141] libmachine: Docker is up and running!
	I0814 16:25:40.536294   31878 main.go:141] libmachine: Reticulating splines...
	I0814 16:25:40.536309   31878 client.go:171] duration metric: took 23.899827196s to LocalClient.Create
	I0814 16:25:40.536332   31878 start.go:167] duration metric: took 23.899896998s to libmachine.API.Create "ha-597780"
	I0814 16:25:40.536354   31878 start.go:293] postStartSetup for "ha-597780" (driver="kvm2")
	I0814 16:25:40.536366   31878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:25:40.536381   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.536616   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:25:40.536645   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.538490   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.538846   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.538882   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.539016   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.539227   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.539456   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.539620   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:25:40.617331   31878 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:25:40.621102   31878 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:25:40.621123   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:25:40.621189   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:25:40.621277   31878 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 16:25:40.621288   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 16:25:40.621420   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 16:25:40.630159   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:25:40.652129   31878 start.go:296] duration metric: took 115.760269ms for postStartSetup
	I0814 16:25:40.652188   31878 main.go:141] libmachine: (ha-597780) Calling .GetConfigRaw
	I0814 16:25:40.652822   31878 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:25:40.655420   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.655762   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.655789   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.656099   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:25:40.656317   31878 start.go:128] duration metric: took 24.037606425s to createHost
	I0814 16:25:40.656344   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.658540   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.658909   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.658936   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.659025   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.659204   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.659367   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.659508   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.659707   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:40.659861   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:40.659872   31878 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:25:40.755816   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723652740.735372497
	
	I0814 16:25:40.755837   31878 fix.go:216] guest clock: 1723652740.735372497
	I0814 16:25:40.755846   31878 fix.go:229] Guest: 2024-08-14 16:25:40.735372497 +0000 UTC Remote: 2024-08-14 16:25:40.656331655 +0000 UTC m=+24.138615915 (delta=79.040842ms)
	I0814 16:25:40.755868   31878 fix.go:200] guest clock delta is within tolerance: 79.040842ms
	I0814 16:25:40.755875   31878 start.go:83] releasing machines lock for "ha-597780", held for 24.137268103s
	I0814 16:25:40.755897   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.756161   31878 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:25:40.758861   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.759155   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.759181   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.759371   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.759800   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.759973   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.760051   31878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:25:40.760097   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.760184   31878 ssh_runner.go:195] Run: cat /version.json
	I0814 16:25:40.760208   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.762543   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.762917   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.763034   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.763061   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.763196   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.763283   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.763308   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.763387   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.763486   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.763566   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.763627   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.763720   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:25:40.763762   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.763879   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:25:40.836219   31878 ssh_runner.go:195] Run: systemctl --version
	I0814 16:25:40.872803   31878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:25:41.034508   31878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:25:41.040114   31878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:25:41.040166   31878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:25:41.056211   31878 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 16:25:41.056233   31878 start.go:495] detecting cgroup driver to use...
	I0814 16:25:41.056295   31878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:25:41.073872   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:25:41.087841   31878 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:25:41.087889   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:25:41.101436   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:25:41.114647   31878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:25:41.242293   31878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:25:41.392867   31878 docker.go:233] disabling docker service ...
	I0814 16:25:41.392925   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:25:41.406539   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:25:41.418791   31878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:25:41.562392   31878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:25:41.670141   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:25:41.682918   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:25:41.699581   31878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:25:41.699640   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.708701   31878 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:25:41.708751   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.717814   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.726667   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.735787   31878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:25:41.744771   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.753853   31878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.768967   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.778036   31878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:25:41.786623   31878 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 16:25:41.786690   31878 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 16:25:41.798228   31878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:25:41.807129   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:25:41.915590   31878 ssh_runner.go:195] Run: sudo systemctl restart crio
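	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf and /etc/crictl.yaml with roughly the following contents once crio is restarted (a sketch reconstructed from the commands in this log, not a dump of the actual files):

	    # /etc/crio/crio.conf.d/02-crio.conf (expected fragment)
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	    # /etc/crictl.yaml
	    runtime-endpoint: unix:///var/run/crio/crio.sock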
	I0814 16:25:42.044261   31878 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:25:42.044324   31878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:25:42.048705   31878 start.go:563] Will wait 60s for crictl version
	I0814 16:25:42.048756   31878 ssh_runner.go:195] Run: which crictl
	I0814 16:25:42.052119   31878 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:25:42.088329   31878 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:25:42.088395   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:25:42.115989   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:25:42.145294   31878 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:25:42.146545   31878 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:25:42.149223   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:42.149538   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:42.149569   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:42.149779   31878 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:25:42.153620   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:25:42.165730   31878 kubeadm.go:883] updating cluster {Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 16:25:42.165842   31878 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:25:42.165885   31878 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:25:42.200604   31878 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 16:25:42.200693   31878 ssh_runner.go:195] Run: which lz4
	I0814 16:25:42.204297   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0814 16:25:42.204391   31878 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 16:25:42.207994   31878 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 16:25:42.208028   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 16:25:43.394127   31878 crio.go:462] duration metric: took 1.189761448s to copy over tarball
	I0814 16:25:43.394188   31878 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 16:25:45.390027   31878 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.995810476s)
	I0814 16:25:45.390064   31878 crio.go:469] duration metric: took 1.995914579s to extract the tarball
	I0814 16:25:45.390071   31878 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 16:25:45.427467   31878 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:25:45.470088   31878 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:25:45.470110   31878 cache_images.go:84] Images are preloaded, skipping loading
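	If the preload extraction worked, the control-plane images should now be visible to CRI-O on the node; a quick check (using the same crictl invoked above) would be something like:

	    sudo crictl images | grep registry.k8s.io/kube-apiserver
	    # should list registry.k8s.io/kube-apiserver:v1.31.0 once the tarball has been unpacked into /var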
	I0814 16:25:45.470118   31878 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.0 crio true true} ...
	I0814 16:25:45.470219   31878 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-597780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:25:45.470278   31878 ssh_runner.go:195] Run: crio config
	I0814 16:25:45.515075   31878 cni.go:84] Creating CNI manager for ""
	I0814 16:25:45.515094   31878 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0814 16:25:45.515102   31878 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 16:25:45.515144   31878 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-597780 NodeName:ha-597780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 16:25:45.515274   31878 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-597780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
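	The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs (both steps appear later in this log); to inspect it afterwards, something along these lines should work for this profile:

	    out/minikube-linux-amd64 -p ha-597780 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"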
	
	I0814 16:25:45.515297   31878 kube-vip.go:115] generating kube-vip config ...
	I0814 16:25:45.515353   31878 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 16:25:45.530503   31878 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 16:25:45.530621   31878 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
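	This manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below), so the kubelet runs kube-vip as a static pod and the VIP 192.168.39.254 can answer on port 8443 for the HA control plane. A rough way to confirm it is up on the node (container name assumed to match the manifest):

	    out/minikube-linux-amd64 -p ha-597780 ssh "sudo crictl ps --name kube-vip"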
	I0814 16:25:45.530694   31878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:25:45.539737   31878 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 16:25:45.539806   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0814 16:25:45.548183   31878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0814 16:25:45.563371   31878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:25:45.578355   31878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0814 16:25:45.593987   31878 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0814 16:25:45.609843   31878 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 16:25:45.613628   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
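	This grep/echo/cp pipeline, like the host.minikube.internal one earlier, rewrites /etc/hosts via a temp file; after both have run, the guest's /etc/hosts should contain roughly these two extra entries (other entries omitted):

	    192.168.39.1	host.minikube.internal
	    192.168.39.254	control-plane.minikube.internal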
	I0814 16:25:45.624434   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:25:45.750267   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:25:45.765376   31878 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780 for IP: 192.168.39.4
	I0814 16:25:45.765401   31878 certs.go:194] generating shared ca certs ...
	I0814 16:25:45.765423   31878 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:45.765631   31878 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:25:45.765685   31878 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:25:45.765699   31878 certs.go:256] generating profile certs ...
	I0814 16:25:45.765763   31878 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key
	I0814 16:25:45.765789   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt with IP's: []
	I0814 16:25:45.882404   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt ...
	I0814 16:25:45.882431   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt: {Name:mk5c5a98085888ca6febc66415d437d0012bb40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:45.882602   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key ...
	I0814 16:25:45.882614   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key: {Name:mk7da86224abddf18d89cfe84fa53bc6be9a481f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:45.882687   31878 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.532024e0
	I0814 16:25:45.882707   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.532024e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.254]
	I0814 16:25:46.097370   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.532024e0 ...
	I0814 16:25:46.097399   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.532024e0: {Name:mk68b70a36dbd806aacd25471a1104371a586b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:46.097552   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.532024e0 ...
	I0814 16:25:46.097565   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.532024e0: {Name:mk0d223519a26ba2f37b494273f30644ffa08449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:46.097632   31878 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.532024e0 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt
	I0814 16:25:46.097718   31878 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.532024e0 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key
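	The apiserver certificate generated above is signed for both the node IP 192.168.39.4 and the kube-vip VIP 192.168.39.254, so clients using either endpoint pass TLS verification. A standard openssl check against the path from this log:

	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt | grep -A1 "Subject Alternative Name"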
	I0814 16:25:46.097771   31878 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key
	I0814 16:25:46.097786   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt with IP's: []
	I0814 16:25:46.205695   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt ...
	I0814 16:25:46.205725   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt: {Name:mkdf9a77f4c8f8d2c0e1538b16a9760abb4ed441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:46.205897   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key ...
	I0814 16:25:46.205910   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key: {Name:mk17bddeba50b7cc1228cf21c55462eb62fa48ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:46.205997   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 16:25:46.206016   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 16:25:46.206032   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 16:25:46.206057   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 16:25:46.206079   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 16:25:46.206096   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 16:25:46.206111   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 16:25:46.206125   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 16:25:46.206176   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 16:25:46.206220   31878 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 16:25:46.206228   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:25:46.206254   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:25:46.206279   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:25:46.206310   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:25:46.206351   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:25:46.206382   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 16:25:46.206398   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 16:25:46.206413   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:25:46.206977   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:25:46.230888   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:25:46.252235   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:25:46.273903   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:25:46.294762   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 16:25:46.316062   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:25:46.337249   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:25:46.358453   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:25:46.380800   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 16:25:46.403386   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 16:25:46.431042   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:25:46.458718   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 16:25:46.475992   31878 ssh_runner.go:195] Run: openssl version
	I0814 16:25:46.481464   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 16:25:46.491579   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 16:25:46.495749   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 16:25:46.495791   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 16:25:46.501365   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 16:25:46.511290   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 16:25:46.526773   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 16:25:46.532476   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 16:25:46.532540   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 16:25:46.540197   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 16:25:46.552276   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:25:46.566795   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:25:46.571978   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:25:46.572021   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:25:46.577549   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
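	The link names used above are OpenSSL subject hashes: each "openssl x509 -hash -noout" run prints the value that the following ln -fs uses as /etc/ssl/certs/<hash>.0, which is how OpenSSL's CApath lookup locates the CA. For the minikube CA in this run, for example:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0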
	I0814 16:25:46.592133   31878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:25:46.596232   31878 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:25:46.596288   31878 kubeadm.go:392] StartCluster: {Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:25:46.596374   31878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 16:25:46.596429   31878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 16:25:46.630026   31878 cri.go:89] found id: ""
	I0814 16:25:46.630109   31878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 16:25:46.639521   31878 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 16:25:46.648484   31878 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 16:25:46.656920   31878 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 16:25:46.656939   31878 kubeadm.go:157] found existing configuration files:
	
	I0814 16:25:46.656989   31878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 16:25:46.665224   31878 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 16:25:46.665273   31878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 16:25:46.673373   31878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 16:25:46.681173   31878 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 16:25:46.681232   31878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 16:25:46.689378   31878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 16:25:46.697165   31878 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 16:25:46.697211   31878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 16:25:46.705214   31878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 16:25:46.712909   31878 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 16:25:46.712967   31878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 16:25:46.721049   31878 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 16:25:46.803347   31878 kubeadm.go:310] W0814 16:25:46.788453     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:25:46.804143   31878 kubeadm.go:310] W0814 16:25:46.789487     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:25:46.906447   31878 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 16:26:00.479126   31878 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 16:26:00.479193   31878 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 16:26:00.479275   31878 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 16:26:00.479406   31878 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 16:26:00.479551   31878 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 16:26:00.479650   31878 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 16:26:00.481173   31878 out.go:204]   - Generating certificates and keys ...
	I0814 16:26:00.481261   31878 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 16:26:00.481333   31878 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 16:26:00.481418   31878 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 16:26:00.481493   31878 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 16:26:00.481581   31878 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 16:26:00.481651   31878 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 16:26:00.481714   31878 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 16:26:00.481865   31878 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-597780 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0814 16:26:00.481949   31878 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 16:26:00.482057   31878 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-597780 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0814 16:26:00.482134   31878 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 16:26:00.482227   31878 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 16:26:00.482323   31878 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 16:26:00.482408   31878 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 16:26:00.482488   31878 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 16:26:00.482578   31878 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 16:26:00.482623   31878 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 16:26:00.482707   31878 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 16:26:00.482791   31878 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 16:26:00.482872   31878 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 16:26:00.482952   31878 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 16:26:00.484291   31878 out.go:204]   - Booting up control plane ...
	I0814 16:26:00.484406   31878 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 16:26:00.484484   31878 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 16:26:00.484558   31878 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 16:26:00.484671   31878 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 16:26:00.484795   31878 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 16:26:00.484872   31878 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 16:26:00.484988   31878 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 16:26:00.485106   31878 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 16:26:00.485161   31878 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.682567ms
	I0814 16:26:00.485232   31878 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 16:26:00.485327   31878 kubeadm.go:310] [api-check] The API server is healthy after 9.053757546s
	I0814 16:26:00.485475   31878 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 16:26:00.485659   31878 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 16:26:00.485712   31878 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 16:26:00.485922   31878 kubeadm.go:310] [mark-control-plane] Marking the node ha-597780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 16:26:00.486002   31878 kubeadm.go:310] [bootstrap-token] Using token: 3teiyp.0zkkksy6kl58w9xk
	I0814 16:26:00.487553   31878 out.go:204]   - Configuring RBAC rules ...
	I0814 16:26:00.487681   31878 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 16:26:00.487759   31878 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 16:26:00.487925   31878 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 16:26:00.488072   31878 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 16:26:00.488226   31878 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 16:26:00.488363   31878 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 16:26:00.488496   31878 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 16:26:00.488601   31878 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 16:26:00.488665   31878 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 16:26:00.488675   31878 kubeadm.go:310] 
	I0814 16:26:00.488748   31878 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 16:26:00.488758   31878 kubeadm.go:310] 
	I0814 16:26:00.488854   31878 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 16:26:00.488863   31878 kubeadm.go:310] 
	I0814 16:26:00.488899   31878 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 16:26:00.488974   31878 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 16:26:00.489049   31878 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 16:26:00.489056   31878 kubeadm.go:310] 
	I0814 16:26:00.489134   31878 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 16:26:00.489143   31878 kubeadm.go:310] 
	I0814 16:26:00.489206   31878 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 16:26:00.489226   31878 kubeadm.go:310] 
	I0814 16:26:00.489287   31878 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 16:26:00.489438   31878 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 16:26:00.489542   31878 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 16:26:00.489551   31878 kubeadm.go:310] 
	I0814 16:26:00.489652   31878 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 16:26:00.489718   31878 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 16:26:00.489725   31878 kubeadm.go:310] 
	I0814 16:26:00.489843   31878 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3teiyp.0zkkksy6kl58w9xk \
	I0814 16:26:00.489993   31878 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 16:26:00.490025   31878 kubeadm.go:310] 	--control-plane 
	I0814 16:26:00.490030   31878 kubeadm.go:310] 
	I0814 16:26:00.490144   31878 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 16:26:00.490156   31878 kubeadm.go:310] 
	I0814 16:26:00.490265   31878 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3teiyp.0zkkksy6kl58w9xk \
	I0814 16:26:00.490451   31878 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 16:26:00.490469   31878 cni.go:84] Creating CNI manager for ""
	I0814 16:26:00.490474   31878 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0814 16:26:00.492190   31878 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 16:26:00.493484   31878 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0814 16:26:00.498701   31878 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0814 16:26:00.498717   31878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0814 16:26:00.514828   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
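	With a single node detected, minikube picked kindnet as the CNI and applied its manifest with the cluster's own kubectl. A rough check that the DaemonSet came up (the name kindnet is assumed from minikube's bundled manifest, which is not dumped in this log):

	    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet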
	I0814 16:26:00.910545   31878 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 16:26:00.910633   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:26:00.910644   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-597780 minikube.k8s.io/updated_at=2024_08_14T16_26_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=ha-597780 minikube.k8s.io/primary=true
	I0814 16:26:01.073531   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:26:01.117729   31878 ops.go:34] apiserver oom_adj: -16
	I0814 16:26:01.574464   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:26:01.671358   31878 kubeadm.go:1113] duration metric: took 760.793717ms to wait for elevateKubeSystemPrivileges
	I0814 16:26:01.671403   31878 kubeadm.go:394] duration metric: took 15.075119104s to StartCluster
	I0814 16:26:01.671425   31878 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:01.671514   31878 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:26:01.672172   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:01.672419   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 16:26:01.672425   31878 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:26:01.672451   31878 start.go:241] waiting for startup goroutines ...
	I0814 16:26:01.672471   31878 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 16:26:01.672540   31878 addons.go:69] Setting storage-provisioner=true in profile "ha-597780"
	I0814 16:26:01.672549   31878 addons.go:69] Setting default-storageclass=true in profile "ha-597780"
	I0814 16:26:01.672570   31878 addons.go:234] Setting addon storage-provisioner=true in "ha-597780"
	I0814 16:26:01.672575   31878 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-597780"
	I0814 16:26:01.672600   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:26:01.672631   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:26:01.673005   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.673021   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.673042   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.673052   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.687831   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0814 16:26:01.688157   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0814 16:26:01.688289   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.688603   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.688835   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.688858   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.689098   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.689126   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.689251   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.689441   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.689622   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:26:01.689886   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.689922   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.691907   31878 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:26:01.692231   31878 kapi.go:59] client config for ha-597780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key", CAFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f170c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 16:26:01.692696   31878 cert_rotation.go:140] Starting client certificate rotation controller
	I0814 16:26:01.692964   31878 addons.go:234] Setting addon default-storageclass=true in "ha-597780"
	I0814 16:26:01.693003   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:26:01.693357   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.693387   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.705150   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0814 16:26:01.705670   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.706203   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.706230   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.706536   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.706754   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:26:01.708441   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:26:01.708491   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33399
	I0814 16:26:01.708865   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.709542   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.709557   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.709849   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.710283   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.710296   31878 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 16:26:01.710323   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.711487   31878 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:26:01.711506   31878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 16:26:01.711524   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:26:01.714640   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:01.715112   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:26:01.715133   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:01.715305   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:26:01.715482   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:26:01.715667   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:26:01.715841   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:26:01.725065   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45303
	I0814 16:26:01.725432   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.725858   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.725877   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.726170   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.726366   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:26:01.727693   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:26:01.727908   31878 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 16:26:01.727922   31878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 16:26:01.727934   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:26:01.730593   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:01.730926   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:26:01.730953   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:01.731078   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:26:01.731218   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:26:01.731391   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:26:01.731504   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:26:01.798415   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 16:26:01.816423   31878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:26:01.847088   31878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 16:26:02.254415   31878 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
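
The coredns step above works by rewriting the CoreDNS Corefile stored in the kube-system coredns ConfigMap: the sed pipeline inserts a hosts block mapping host.minikube.internal to the gateway IP (plus a log directive) ahead of the "forward . /etc/resolv.conf" line, then kubectl replace pushes the edited ConfigMap back. A minimal Go sketch of that string surgery; injectHosts and the sample Corefile are illustrative assumptions, not minikube code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHosts (hypothetical helper) inserts a CoreDNS hosts block for
    // host.minikube.internal immediately before the "forward ." directive,
    // mirroring what the sed pipeline above does to the coredns ConfigMap.
    func injectHosts(corefile, hostIP string) string {
    	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out.WriteString(block)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
    	fmt.Print(injectHosts(corefile, "192.168.39.1"))
    }
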
	I0814 16:26:02.499957   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.499980   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.500014   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.500073   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.500277   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.500335   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.500359   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.500385   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.500389   31878 main.go:141] libmachine: (ha-597780) DBG | Closing plugin on server side
	I0814 16:26:02.500384   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.500418   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.500435   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.500447   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.500616   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.500635   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.500684   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.500699   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.500706   31878 main.go:141] libmachine: (ha-597780) DBG | Closing plugin on server side
	I0814 16:26:02.500750   31878 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0814 16:26:02.500770   31878 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0814 16:26:02.500855   31878 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0814 16:26:02.500866   31878 round_trippers.go:469] Request Headers:
	I0814 16:26:02.500877   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:26:02.500887   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:26:02.511730   31878 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0814 16:26:02.512383   31878 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0814 16:26:02.512400   31878 round_trippers.go:469] Request Headers:
	I0814 16:26:02.512408   31878 round_trippers.go:473]     Content-Type: application/json
	I0814 16:26:02.512415   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:26:02.512422   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:26:02.517415   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:26:02.517567   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.517578   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.517897   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.517923   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.517934   31878 main.go:141] libmachine: (ha-597780) DBG | Closing plugin on server side
	I0814 16:26:02.519809   31878 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0814 16:26:02.521080   31878 addons.go:510] duration metric: took 848.618532ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0814 16:26:02.521109   31878 start.go:246] waiting for cluster config update ...
	I0814 16:26:02.521119   31878 start.go:255] writing updated cluster config ...
	I0814 16:26:02.522687   31878 out.go:177] 
	I0814 16:26:02.524065   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:26:02.524136   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:26:02.525889   31878 out.go:177] * Starting "ha-597780-m02" control-plane node in "ha-597780" cluster
	I0814 16:26:02.527045   31878 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:26:02.527072   31878 cache.go:56] Caching tarball of preloaded images
	I0814 16:26:02.527169   31878 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:26:02.527182   31878 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:26:02.527277   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:26:02.527545   31878 start.go:360] acquireMachinesLock for ha-597780-m02: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:26:02.527617   31878 start.go:364] duration metric: took 50.662µs to acquireMachinesLock for "ha-597780-m02"
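
acquireMachinesLock serializes machine creation for the profile; the spec above shows a 500ms retry delay and a 13m timeout. The sketch below only illustrates the general pattern with a plain lock file and a polling loop; the path and helper are hypothetical, and minikube's real lock implementation differs:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquireFileLock is an illustrative stand-in for a named machines lock:
    // it retries an exclusive file create every `delay` until `timeout` elapses.
    func acquireFileLock(path string, delay, timeout time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out waiting for machines lock")
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireFileLock("/tmp/ha-597780-m02.lock", 500*time.Millisecond, 13*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held; provisioning machine...")
    }
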
	I0814 16:26:02.527642   31878 start.go:93] Provisioning new machine with config: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:26:02.527782   31878 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0814 16:26:02.530447   31878 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 16:26:02.530557   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:02.530587   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:02.545661   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0814 16:26:02.546072   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:02.546637   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:02.546664   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:02.547063   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:02.547287   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetMachineName
	I0814 16:26:02.547456   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:02.547608   31878 start.go:159] libmachine.API.Create for "ha-597780" (driver="kvm2")
	I0814 16:26:02.547630   31878 client.go:168] LocalClient.Create starting
	I0814 16:26:02.547671   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 16:26:02.547710   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:26:02.547726   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:26:02.547776   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 16:26:02.547794   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:26:02.547806   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:26:02.547822   31878 main.go:141] libmachine: Running pre-create checks...
	I0814 16:26:02.547830   31878 main.go:141] libmachine: (ha-597780-m02) Calling .PreCreateCheck
	I0814 16:26:02.547987   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetConfigRaw
	I0814 16:26:02.548414   31878 main.go:141] libmachine: Creating machine...
	I0814 16:26:02.548428   31878 main.go:141] libmachine: (ha-597780-m02) Calling .Create
	I0814 16:26:02.548567   31878 main.go:141] libmachine: (ha-597780-m02) Creating KVM machine...
	I0814 16:26:02.549806   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found existing default KVM network
	I0814 16:26:02.549997   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found existing private KVM network mk-ha-597780
	I0814 16:26:02.550165   31878 main.go:141] libmachine: (ha-597780-m02) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02 ...
	I0814 16:26:02.550189   31878 main.go:141] libmachine: (ha-597780-m02) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:26:02.550255   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:02.550151   32270 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:26:02.550367   31878 main.go:141] libmachine: (ha-597780-m02) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 16:26:02.783188   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:02.783060   32270 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa...
	I0814 16:26:03.055543   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:03.055379   32270 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/ha-597780-m02.rawdisk...
	I0814 16:26:03.055587   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Writing magic tar header
	I0814 16:26:03.055602   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Writing SSH key tar header
	I0814 16:26:03.055611   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:03.055490   32270 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02 ...
	I0814 16:26:03.055621   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02
	I0814 16:26:03.055629   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 16:26:03.055641   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:26:03.055651   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 16:26:03.055662   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02 (perms=drwx------)
	I0814 16:26:03.055676   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 16:26:03.055691   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 16:26:03.055704   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 16:26:03.055711   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 16:26:03.055720   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 16:26:03.055727   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 16:26:03.055737   31878 main.go:141] libmachine: (ha-597780-m02) Creating domain...
	I0814 16:26:03.055755   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins
	I0814 16:26:03.055769   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home
	I0814 16:26:03.055780   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Skipping /home - not owner
	I0814 16:26:03.056673   31878 main.go:141] libmachine: (ha-597780-m02) define libvirt domain using xml: 
	I0814 16:26:03.056691   31878 main.go:141] libmachine: (ha-597780-m02) <domain type='kvm'>
	I0814 16:26:03.056699   31878 main.go:141] libmachine: (ha-597780-m02)   <name>ha-597780-m02</name>
	I0814 16:26:03.056704   31878 main.go:141] libmachine: (ha-597780-m02)   <memory unit='MiB'>2200</memory>
	I0814 16:26:03.056710   31878 main.go:141] libmachine: (ha-597780-m02)   <vcpu>2</vcpu>
	I0814 16:26:03.056715   31878 main.go:141] libmachine: (ha-597780-m02)   <features>
	I0814 16:26:03.056720   31878 main.go:141] libmachine: (ha-597780-m02)     <acpi/>
	I0814 16:26:03.056727   31878 main.go:141] libmachine: (ha-597780-m02)     <apic/>
	I0814 16:26:03.056732   31878 main.go:141] libmachine: (ha-597780-m02)     <pae/>
	I0814 16:26:03.056736   31878 main.go:141] libmachine: (ha-597780-m02)     
	I0814 16:26:03.056742   31878 main.go:141] libmachine: (ha-597780-m02)   </features>
	I0814 16:26:03.056750   31878 main.go:141] libmachine: (ha-597780-m02)   <cpu mode='host-passthrough'>
	I0814 16:26:03.056756   31878 main.go:141] libmachine: (ha-597780-m02)   
	I0814 16:26:03.056762   31878 main.go:141] libmachine: (ha-597780-m02)   </cpu>
	I0814 16:26:03.056785   31878 main.go:141] libmachine: (ha-597780-m02)   <os>
	I0814 16:26:03.056800   31878 main.go:141] libmachine: (ha-597780-m02)     <type>hvm</type>
	I0814 16:26:03.056809   31878 main.go:141] libmachine: (ha-597780-m02)     <boot dev='cdrom'/>
	I0814 16:26:03.056814   31878 main.go:141] libmachine: (ha-597780-m02)     <boot dev='hd'/>
	I0814 16:26:03.056823   31878 main.go:141] libmachine: (ha-597780-m02)     <bootmenu enable='no'/>
	I0814 16:26:03.056828   31878 main.go:141] libmachine: (ha-597780-m02)   </os>
	I0814 16:26:03.056839   31878 main.go:141] libmachine: (ha-597780-m02)   <devices>
	I0814 16:26:03.056845   31878 main.go:141] libmachine: (ha-597780-m02)     <disk type='file' device='cdrom'>
	I0814 16:26:03.056856   31878 main.go:141] libmachine: (ha-597780-m02)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/boot2docker.iso'/>
	I0814 16:26:03.056865   31878 main.go:141] libmachine: (ha-597780-m02)       <target dev='hdc' bus='scsi'/>
	I0814 16:26:03.056883   31878 main.go:141] libmachine: (ha-597780-m02)       <readonly/>
	I0814 16:26:03.056899   31878 main.go:141] libmachine: (ha-597780-m02)     </disk>
	I0814 16:26:03.056912   31878 main.go:141] libmachine: (ha-597780-m02)     <disk type='file' device='disk'>
	I0814 16:26:03.056919   31878 main.go:141] libmachine: (ha-597780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 16:26:03.056934   31878 main.go:141] libmachine: (ha-597780-m02)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/ha-597780-m02.rawdisk'/>
	I0814 16:26:03.056950   31878 main.go:141] libmachine: (ha-597780-m02)       <target dev='hda' bus='virtio'/>
	I0814 16:26:03.056963   31878 main.go:141] libmachine: (ha-597780-m02)     </disk>
	I0814 16:26:03.056976   31878 main.go:141] libmachine: (ha-597780-m02)     <interface type='network'>
	I0814 16:26:03.056989   31878 main.go:141] libmachine: (ha-597780-m02)       <source network='mk-ha-597780'/>
	I0814 16:26:03.057000   31878 main.go:141] libmachine: (ha-597780-m02)       <model type='virtio'/>
	I0814 16:26:03.057011   31878 main.go:141] libmachine: (ha-597780-m02)     </interface>
	I0814 16:26:03.057021   31878 main.go:141] libmachine: (ha-597780-m02)     <interface type='network'>
	I0814 16:26:03.057033   31878 main.go:141] libmachine: (ha-597780-m02)       <source network='default'/>
	I0814 16:26:03.057046   31878 main.go:141] libmachine: (ha-597780-m02)       <model type='virtio'/>
	I0814 16:26:03.057063   31878 main.go:141] libmachine: (ha-597780-m02)     </interface>
	I0814 16:26:03.057079   31878 main.go:141] libmachine: (ha-597780-m02)     <serial type='pty'>
	I0814 16:26:03.057090   31878 main.go:141] libmachine: (ha-597780-m02)       <target port='0'/>
	I0814 16:26:03.057098   31878 main.go:141] libmachine: (ha-597780-m02)     </serial>
	I0814 16:26:03.057108   31878 main.go:141] libmachine: (ha-597780-m02)     <console type='pty'>
	I0814 16:26:03.057119   31878 main.go:141] libmachine: (ha-597780-m02)       <target type='serial' port='0'/>
	I0814 16:26:03.057130   31878 main.go:141] libmachine: (ha-597780-m02)     </console>
	I0814 16:26:03.057140   31878 main.go:141] libmachine: (ha-597780-m02)     <rng model='virtio'>
	I0814 16:26:03.057153   31878 main.go:141] libmachine: (ha-597780-m02)       <backend model='random'>/dev/random</backend>
	I0814 16:26:03.057163   31878 main.go:141] libmachine: (ha-597780-m02)     </rng>
	I0814 16:26:03.057171   31878 main.go:141] libmachine: (ha-597780-m02)     
	I0814 16:26:03.057180   31878 main.go:141] libmachine: (ha-597780-m02)     
	I0814 16:26:03.057192   31878 main.go:141] libmachine: (ha-597780-m02)   </devices>
	I0814 16:26:03.057205   31878 main.go:141] libmachine: (ha-597780-m02) </domain>
	I0814 16:26:03.057219   31878 main.go:141] libmachine: (ha-597780-m02) 
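
The XML printed above is what the kvm2 driver hands to libvirt to define and boot the ha-597780-m02 domain. As an illustration only, the same effect can be had from the shell via virsh; below is a Go sketch wrapping those commands with os/exec (the scratch path and the truncated XML are assumptions, not the driver's actual code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // defineAndStart writes the domain XML to disk, defines the domain with
    // virsh, and starts it. This is a command-line stand-in for the kvm2
    // driver's libvirt API calls.
    func defineAndStart(name, domainXML string) error {
    	path := "/tmp/" + name + ".xml" // assumed scratch location
    	if err := os.WriteFile(path, []byte(domainXML), 0o644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"define", path},
    		{"start", name},
    	} {
    		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	xml := "<domain type='kvm'><name>ha-597780-m02</name>...</domain>" // truncated for illustration
    	if err := defineAndStart("ha-597780-m02", xml); err != nil {
    		fmt.Println(err)
    	}
    }
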
	I0814 16:26:03.064138   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:f2:f7:9d in network default
	I0814 16:26:03.064762   31878 main.go:141] libmachine: (ha-597780-m02) Ensuring networks are active...
	I0814 16:26:03.064786   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:03.065457   31878 main.go:141] libmachine: (ha-597780-m02) Ensuring network default is active
	I0814 16:26:03.065752   31878 main.go:141] libmachine: (ha-597780-m02) Ensuring network mk-ha-597780 is active
	I0814 16:26:03.066114   31878 main.go:141] libmachine: (ha-597780-m02) Getting domain xml...
	I0814 16:26:03.066935   31878 main.go:141] libmachine: (ha-597780-m02) Creating domain...
	I0814 16:26:04.286666   31878 main.go:141] libmachine: (ha-597780-m02) Waiting to get IP...
	I0814 16:26:04.287534   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:04.287963   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:04.288008   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:04.287947   32270 retry.go:31] will retry after 284.974697ms: waiting for machine to come up
	I0814 16:26:04.574439   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:04.574948   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:04.574982   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:04.574905   32270 retry.go:31] will retry after 302.655814ms: waiting for machine to come up
	I0814 16:26:04.879559   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:04.880069   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:04.880095   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:04.880024   32270 retry.go:31] will retry after 418.223326ms: waiting for machine to come up
	I0814 16:26:05.299625   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:05.300130   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:05.300157   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:05.300082   32270 retry.go:31] will retry after 429.163095ms: waiting for machine to come up
	I0814 16:26:05.730403   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:05.730794   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:05.730820   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:05.730767   32270 retry.go:31] will retry after 570.642173ms: waiting for machine to come up
	I0814 16:26:06.303597   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:06.304125   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:06.304152   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:06.304081   32270 retry.go:31] will retry after 714.864202ms: waiting for machine to come up
	I0814 16:26:07.020905   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:07.021301   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:07.021340   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:07.021271   32270 retry.go:31] will retry after 1.021402695s: waiting for machine to come up
	I0814 16:26:08.044492   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:08.045020   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:08.045044   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:08.044979   32270 retry.go:31] will retry after 1.125931245s: waiting for machine to come up
	I0814 16:26:09.172396   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:09.172980   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:09.173010   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:09.172925   32270 retry.go:31] will retry after 1.215910282s: waiting for machine to come up
	I0814 16:26:10.390312   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:10.390900   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:10.390931   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:10.390850   32270 retry.go:31] will retry after 1.997454268s: waiting for machine to come up
	I0814 16:26:12.390167   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:12.390590   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:12.390617   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:12.390553   32270 retry.go:31] will retry after 1.986753055s: waiting for machine to come up
	I0814 16:26:14.379278   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:14.379718   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:14.379749   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:14.379679   32270 retry.go:31] will retry after 2.641653092s: waiting for machine to come up
	I0814 16:26:17.024462   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:17.024995   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:17.025018   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:17.024952   32270 retry.go:31] will retry after 2.84006709s: waiting for machine to come up
	I0814 16:26:19.868041   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:19.868476   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:19.868502   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:19.868432   32270 retry.go:31] will retry after 3.47024794s: waiting for machine to come up
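
Each "will retry after" line above is one pass of a poll loop: the driver looks for a DHCP lease matching the domain's MAC address and, if none exists yet, sleeps for a growing, jittered interval before trying again. A simplified Go sketch of that shape; lookupIP is a placeholder, not the driver's real lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for querying libvirt's DHCP leases for the domain's
    // MAC address; it is a placeholder that never succeeds.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    // waitForIP polls lookupIP with a growing, jittered backoff, the same
    // shape as the retry.go lines in the log above.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	wait := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		sleep := wait + time.Duration(rand.Int63n(int64(wait/2))) // add jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if wait < 4*time.Second {
    			wait *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	if _, err := waitForIP("52:54:00:a6:ae:4d", 5*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
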
	I0814 16:26:23.340057   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.340526   31878 main.go:141] libmachine: (ha-597780-m02) Found IP for machine: 192.168.39.225
	I0814 16:26:23.340549   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has current primary IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.340556   31878 main.go:141] libmachine: (ha-597780-m02) Reserving static IP address...
	I0814 16:26:23.340995   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find host DHCP lease matching {name: "ha-597780-m02", mac: "52:54:00:a6:ae:4d", ip: "192.168.39.225"} in network mk-ha-597780
	I0814 16:26:23.412027   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Getting to WaitForSSH function...
	I0814 16:26:23.412067   31878 main.go:141] libmachine: (ha-597780-m02) Reserved static IP address: 192.168.39.225
	I0814 16:26:23.412084   31878 main.go:141] libmachine: (ha-597780-m02) Waiting for SSH to be available...
	I0814 16:26:23.414819   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.415353   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.415389   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.415464   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Using SSH client type: external
	I0814 16:26:23.415488   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa (-rw-------)
	I0814 16:26:23.415519   31878 main.go:141] libmachine: (ha-597780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:26:23.415536   31878 main.go:141] libmachine: (ha-597780-m02) DBG | About to run SSH command:
	I0814 16:26:23.415548   31878 main.go:141] libmachine: (ha-597780-m02) DBG | exit 0
	I0814 16:26:23.543508   31878 main.go:141] libmachine: (ha-597780-m02) DBG | SSH cmd err, output: <nil>: 
	I0814 16:26:23.543804   31878 main.go:141] libmachine: (ha-597780-m02) KVM machine creation complete!
	I0814 16:26:23.544081   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetConfigRaw
	I0814 16:26:23.544649   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:23.544868   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:23.545013   31878 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 16:26:23.545039   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:26:23.546192   31878 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 16:26:23.546209   31878 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 16:26:23.546217   31878 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 16:26:23.546226   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:23.548633   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.549018   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.549048   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.549162   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:23.549343   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.549479   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.549582   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:23.549720   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:23.549944   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:23.549955   31878 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 16:26:23.654797   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:26:23.654820   31878 main.go:141] libmachine: Detecting the provisioner...
	I0814 16:26:23.654829   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:23.658593   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.659070   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.659100   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.659254   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:23.659459   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.659659   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.659814   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:23.659950   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:23.660113   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:23.660122   31878 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 16:26:23.763742   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 16:26:23.763808   31878 main.go:141] libmachine: found compatible host: buildroot
	I0814 16:26:23.763818   31878 main.go:141] libmachine: Provisioning with buildroot...
	I0814 16:26:23.763829   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetMachineName
	I0814 16:26:23.764038   31878 buildroot.go:166] provisioning hostname "ha-597780-m02"
	I0814 16:26:23.764060   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetMachineName
	I0814 16:26:23.764242   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:23.766923   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.767359   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.767441   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.767471   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:23.767648   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.767780   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.767883   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:23.768049   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:23.768210   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:23.768221   31878 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-597780-m02 && echo "ha-597780-m02" | sudo tee /etc/hostname
	I0814 16:26:23.884232   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780-m02
	
	I0814 16:26:23.884256   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:23.887354   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.887725   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.887754   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.887986   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:23.888181   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.888400   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.888533   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:23.888694   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:23.888855   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:23.888871   31878 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-597780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-597780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-597780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:26:23.999352   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:26:23.999387   31878 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:26:23.999410   31878 buildroot.go:174] setting up certificates
	I0814 16:26:23.999428   31878 provision.go:84] configureAuth start
	I0814 16:26:23.999448   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetMachineName
	I0814 16:26:23.999743   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:26:24.003017   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.003410   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.003444   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.003644   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.006103   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.006490   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.006539   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.006751   31878 provision.go:143] copyHostCerts
	I0814 16:26:24.006781   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:26:24.006821   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 16:26:24.006832   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:26:24.006902   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:26:24.006977   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:26:24.006995   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 16:26:24.007001   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:26:24.007025   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:26:24.007067   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:26:24.007083   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 16:26:24.007089   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:26:24.007117   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:26:24.007169   31878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.ha-597780-m02 san=[127.0.0.1 192.168.39.225 ha-597780-m02 localhost minikube]
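
provision.go then mints a TLS server certificate whose SANs cover the loopback address, the machine IP, the hostname, localhost and minikube, signed by the profile CA listed in the log. The sketch below generates a certificate with the same SANs but self-signs it to stay self-contained; the 2048-bit key and self-signing are simplifications, not minikube's exact procedure:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-597780-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-597780-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.225")},
    	}
    	// Self-signed here for brevity; minikube signs with the profile CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
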
	I0814 16:26:24.231041   31878 provision.go:177] copyRemoteCerts
	I0814 16:26:24.231099   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:26:24.231121   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.233659   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.233972   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.234000   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.234192   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.234381   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.234562   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.234701   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:26:24.317482   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 16:26:24.317565   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:26:24.341676   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 16:26:24.341753   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 16:26:24.364442   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 16:26:24.364525   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:26:24.385840   31878 provision.go:87] duration metric: took 386.39693ms to configureAuth
	I0814 16:26:24.385866   31878 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:26:24.386068   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:26:24.386144   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.389078   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.389379   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.389411   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.389539   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.389764   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.389920   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.390034   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.390194   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:24.390385   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:24.390404   31878 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:26:24.649600   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:26:24.649624   31878 main.go:141] libmachine: Checking connection to Docker...
	I0814 16:26:24.649633   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetURL
	I0814 16:26:24.651070   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Using libvirt version 6000000
	I0814 16:26:24.653597   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.653953   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.653981   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.654126   31878 main.go:141] libmachine: Docker is up and running!
	I0814 16:26:24.654152   31878 main.go:141] libmachine: Reticulating splines...
	I0814 16:26:24.654174   31878 client.go:171] duration metric: took 22.106515659s to LocalClient.Create
	I0814 16:26:24.654210   31878 start.go:167] duration metric: took 22.106603682s to libmachine.API.Create "ha-597780"
	I0814 16:26:24.654222   31878 start.go:293] postStartSetup for "ha-597780-m02" (driver="kvm2")
	I0814 16:26:24.654237   31878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:26:24.654257   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.654507   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:26:24.654535   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.656700   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.657012   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.657044   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.657162   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.657333   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.657488   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.657704   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:26:24.737221   31878 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:26:24.741077   31878 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:26:24.741102   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:26:24.741166   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:26:24.741249   31878 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 16:26:24.741262   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 16:26:24.741367   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 16:26:24.750247   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:26:24.772658   31878 start.go:296] duration metric: took 118.421827ms for postStartSetup
	I0814 16:26:24.772699   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetConfigRaw
	I0814 16:26:24.773303   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:26:24.776032   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.776377   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.776418   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.776612   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:26:24.776786   31878 start.go:128] duration metric: took 22.248990351s to createHost
	I0814 16:26:24.776808   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.778808   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.779103   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.779131   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.779232   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.779424   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.779643   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.779797   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.779964   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:24.780190   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:24.780208   31878 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:26:24.883637   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723652784.859430769
	
	I0814 16:26:24.883659   31878 fix.go:216] guest clock: 1723652784.859430769
	I0814 16:26:24.883669   31878 fix.go:229] Guest: 2024-08-14 16:26:24.859430769 +0000 UTC Remote: 2024-08-14 16:26:24.776797078 +0000 UTC m=+68.259081330 (delta=82.633691ms)
	I0814 16:26:24.883687   31878 fix.go:200] guest clock delta is within tolerance: 82.633691ms
	I0814 16:26:24.883694   31878 start.go:83] releasing machines lock for "ha-597780-m02", held for 22.356065528s
	I0814 16:26:24.883717   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.884003   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:26:24.886630   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.886977   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.887007   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.889527   31878 out.go:177] * Found network options:
	I0814 16:26:24.890898   31878 out.go:177]   - NO_PROXY=192.168.39.4
	W0814 16:26:24.892203   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	I0814 16:26:24.892251   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.892770   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.892991   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.893074   31878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:26:24.893118   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	W0814 16:26:24.893204   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	I0814 16:26:24.893275   31878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:26:24.893296   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.895815   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.896074   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.896253   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.896282   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.896447   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.896561   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.896594   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.896636   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.896754   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.896910   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.896912   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.897118   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.897112   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:26:24.897291   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:26:25.123383   31878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:26:25.128927   31878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:26:25.128982   31878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:26:25.144488   31878 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 16:26:25.144515   31878 start.go:495] detecting cgroup driver to use...
	I0814 16:26:25.144579   31878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:26:25.161158   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:26:25.174850   31878 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:26:25.174925   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:26:25.188000   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:26:25.200663   31878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:26:25.309694   31878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:26:25.448029   31878 docker.go:233] disabling docker service ...
	I0814 16:26:25.448099   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:26:25.462055   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:26:25.474404   31878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:26:25.606152   31878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:26:25.739595   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:26:25.752778   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:26:25.772085   31878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:26:25.772151   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.782033   31878 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:26:25.782094   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.791657   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.801204   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.811708   31878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:26:25.821849   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.833758   31878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.850966   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.862506   31878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:26:25.871925   31878 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 16:26:25.871982   31878 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 16:26:25.883834   31878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:26:25.893019   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:26:26.002392   31878 ssh_runner.go:195] Run: sudo systemctl restart crio
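The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) before CRI-O is restarted. As an illustrative aside, not part of the recorded run, the resulting drop-in and the restarted runtime could be inspected on the node roughly like this (profile and node names taken from the log; exact flags depend on the local minikube setup):

  # inspect the drop-in the log just rewrote
  minikube ssh -p ha-597780 -n m02 -- sudo cat /etc/crio/crio.conf.d/02-crio.conf
  # confirm the runtime answers on the socket the log waits for below
  minikube ssh -p ha-597780 -n m02 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version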
	I0814 16:26:26.134593   31878 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:26:26.134672   31878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:26:26.139386   31878 start.go:563] Will wait 60s for crictl version
	I0814 16:26:26.139468   31878 ssh_runner.go:195] Run: which crictl
	I0814 16:26:26.142753   31878 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:26:26.179459   31878 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:26:26.179557   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:26:26.204792   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:26:26.232170   31878 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:26:26.233559   31878 out.go:177]   - env NO_PROXY=192.168.39.4
	I0814 16:26:26.234736   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:26:26.237356   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:26.237735   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:26.237759   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:26.237991   31878 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:26:26.241851   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:26:26.253196   31878 mustload.go:65] Loading cluster: ha-597780
	I0814 16:26:26.253368   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:26:26.253614   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:26.253648   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:26.269329   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
	I0814 16:26:26.269734   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:26.270248   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:26.270272   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:26.270645   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:26.270813   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:26:26.272621   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:26:26.272989   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:26.273013   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:26.287349   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0814 16:26:26.287789   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:26.288195   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:26.288213   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:26.288518   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:26.288717   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:26:26.288862   31878 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780 for IP: 192.168.39.225
	I0814 16:26:26.288871   31878 certs.go:194] generating shared ca certs ...
	I0814 16:26:26.288884   31878 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:26.288990   31878 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:26:26.289031   31878 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:26:26.289040   31878 certs.go:256] generating profile certs ...
	I0814 16:26:26.289116   31878 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key
	I0814 16:26:26.289139   31878 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.4cd622c9
	I0814 16:26:26.289150   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.4cd622c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.225 192.168.39.254]
	I0814 16:26:26.631706   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.4cd622c9 ...
	I0814 16:26:26.631738   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.4cd622c9: {Name:mk28e0b5520bad73e9acb336a4dd406a300487c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:26.631902   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.4cd622c9 ...
	I0814 16:26:26.631916   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.4cd622c9: {Name:mk9354ebb43811e70c9c7fd083d8203d518d0483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:26.631988   31878 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.4cd622c9 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt
	I0814 16:26:26.632110   31878 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.4cd622c9 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key
	I0814 16:26:26.632230   31878 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key
	I0814 16:26:26.632244   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 16:26:26.632259   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 16:26:26.632273   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 16:26:26.632285   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 16:26:26.632298   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 16:26:26.632311   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 16:26:26.632325   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 16:26:26.632344   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 16:26:26.632393   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 16:26:26.632420   31878 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 16:26:26.632428   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:26:26.632448   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:26:26.632469   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:26:26.632490   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:26:26.632524   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:26:26.632549   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:26:26.632563   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 16:26:26.632576   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 16:26:26.632620   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:26:26.636176   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:26.636669   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:26:26.636698   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:26.636893   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:26:26.637117   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:26:26.637328   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:26:26.637506   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:26:26.707707   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0814 16:26:26.712554   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0814 16:26:26.723113   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0814 16:26:26.727019   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0814 16:26:26.736104   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0814 16:26:26.739655   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0814 16:26:26.749033   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0814 16:26:26.752517   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0814 16:26:26.761793   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0814 16:26:26.765297   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0814 16:26:26.774268   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0814 16:26:26.777951   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0814 16:26:26.787338   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:26:26.811883   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:26:26.834838   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:26:26.857013   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:26:26.879464   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0814 16:26:26.901765   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:26:26.924506   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:26:26.947630   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:26:26.969205   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:26:26.991149   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 16:26:27.013345   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 16:26:27.035377   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0814 16:26:27.050880   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0814 16:26:27.066377   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0814 16:26:27.081683   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0814 16:26:27.096857   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0814 16:26:27.112524   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0814 16:26:27.127831   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0814 16:26:27.142895   31878 ssh_runner.go:195] Run: openssl version
	I0814 16:26:27.148302   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:26:27.165660   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:26:27.171363   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:26:27.171425   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:26:27.177040   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:26:27.187235   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 16:26:27.197276   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 16:26:27.201413   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 16:26:27.201473   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 16:26:27.206740   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 16:26:27.216704   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 16:26:27.226806   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 16:26:27.231042   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 16:26:27.231104   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 16:26:27.236505   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 16:26:27.247039   31878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:26:27.250817   31878 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:26:27.250866   31878 kubeadm.go:934] updating node {m02 192.168.39.225 8443 v1.31.0 crio true true} ...
	I0814 16:26:27.250943   31878 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-597780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:26:27.250967   31878 kube-vip.go:115] generating kube-vip config ...
	I0814 16:26:27.251000   31878 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 16:26:27.268414   31878 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 16:26:27.268507   31878 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
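The manifest above is the kube-vip static pod minikube generates for the control-plane VIP 192.168.39.254; further down it is copied to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes) so kubelet starts it. A hedged way to confirm it came up once the node has joined (illustrative commands, not from the recorded run, assuming the ha-597780 kubeconfig context minikube creates):

  kubectl --context ha-597780 -n kube-system get pods -o wide | grep kube-vip
  minikube ssh -p ha-597780 -n m02 -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml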
	I0814 16:26:27.268578   31878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:26:27.278118   31878 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0814 16:26:27.278186   31878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0814 16:26:27.287145   31878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0814 16:26:27.287172   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0814 16:26:27.287214   31878 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0814 16:26:27.287239   31878 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0814 16:26:27.287243   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0814 16:26:27.291107   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0814 16:26:27.291150   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0814 16:27:00.804434   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0814 16:27:00.804518   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0814 16:27:00.809548   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0814 16:27:00.809588   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0814 16:27:13.221675   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:27:13.236972   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0814 16:27:13.237073   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0814 16:27:13.241559   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0814 16:27:13.241590   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0814 16:27:13.538950   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0814 16:27:13.547991   31878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 16:27:13.563482   31878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:27:13.579692   31878 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0814 16:27:13.594987   31878 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 16:27:13.598421   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:27:13.609968   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:27:13.735676   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:27:13.751272   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:27:13.751769   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:27:13.751828   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:27:13.768034   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44469
	I0814 16:27:13.768463   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:27:13.769009   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:27:13.769038   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:27:13.769368   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:27:13.769543   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:27:13.769708   31878 start.go:317] joinCluster: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:27:13.769820   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0814 16:27:13.769837   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:27:13.772793   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:27:13.773199   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:27:13.773221   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:27:13.773393   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:27:13.773561   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:27:13.773709   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:27:13.773845   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:27:13.920227   31878 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:27:13.920280   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zaslr5.s1i9whjerq2tnrrc --discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-597780-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I0814 16:27:35.956068   31878 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zaslr5.s1i9whjerq2tnrrc --discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-597780-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (22.035764529s)
	I0814 16:27:35.956111   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0814 16:27:36.529697   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-597780-m02 minikube.k8s.io/updated_at=2024_08_14T16_27_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=ha-597780 minikube.k8s.io/primary=false
	I0814 16:27:36.645864   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-597780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0814 16:27:36.787680   31878 start.go:319] duration metric: took 23.017968041s to joinCluster
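From here the log polls GET /api/v1/nodes/ha-597780-m02 against the first control plane until the new node reports Ready (up to 6m0s). Outside the test harness, an equivalent check would be the following one-liner (illustrative only, assuming the ha-597780 kubeconfig context created by minikube):

  kubectl --context ha-597780 wait --for=condition=Ready node/ha-597780-m02 --timeout=6m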
	I0814 16:27:36.787754   31878 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:27:36.788078   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:27:36.789164   31878 out.go:177] * Verifying Kubernetes components...
	I0814 16:27:36.790415   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:27:37.054953   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:27:37.109578   31878 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:27:37.109807   31878 kapi.go:59] client config for ha-597780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key", CAFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f170c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0814 16:27:37.109861   31878 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0814 16:27:37.110035   31878 node_ready.go:35] waiting up to 6m0s for node "ha-597780-m02" to be "Ready" ...
	I0814 16:27:37.110118   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:37.110126   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:37.110132   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:37.110138   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:37.121900   31878 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0814 16:27:37.611026   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:37.611059   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:37.611071   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:37.611077   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:37.614806   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:38.110646   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:38.110665   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:38.110673   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:38.110679   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:38.132949   31878 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0814 16:27:38.610329   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:38.610352   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:38.610360   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:38.610364   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:38.613740   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:39.111009   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:39.111034   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:39.111042   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:39.111048   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:39.114115   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:39.114632   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:39.611071   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:39.611098   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:39.611109   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:39.611114   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:39.614635   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:40.110569   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:40.110604   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:40.110616   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:40.110623   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:40.113861   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:40.610273   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:40.610294   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:40.610302   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:40.610306   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:40.614230   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:41.110371   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:41.110394   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:41.110410   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:41.110415   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:41.113897   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:41.610972   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:41.610996   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:41.611005   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:41.611010   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:41.613977   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:41.614505   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:42.110441   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:42.110468   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:42.110480   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:42.110487   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:42.114186   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:42.610662   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:42.610750   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:42.610765   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:42.610772   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:42.618561   31878 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0814 16:27:43.110582   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:43.110603   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:43.110614   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:43.110618   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:43.115137   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:27:43.611032   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:43.611054   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:43.611062   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:43.611065   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:43.614576   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:43.615136   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:44.111097   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:44.111121   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:44.111133   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:44.111138   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:44.117130   31878 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0814 16:27:44.610994   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:44.611035   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:44.611050   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:44.611055   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:44.614384   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:45.110195   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:45.110217   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:45.110225   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:45.110229   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:45.113002   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:45.610758   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:45.610779   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:45.610787   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:45.610792   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:45.614350   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:46.110258   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:46.110285   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:46.110296   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:46.110300   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:46.113875   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:46.114503   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:46.611108   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:46.611132   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:46.611140   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:46.611143   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:46.614624   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:47.110971   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:47.110995   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:47.111003   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:47.111007   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:47.114336   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:47.610930   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:47.610956   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:47.610964   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:47.610968   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:47.614255   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:48.110683   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:48.110707   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:48.110714   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:48.110720   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:48.114267   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:48.114881   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:48.610255   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:48.610278   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:48.610286   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:48.610292   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:48.613746   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:49.111090   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:49.111110   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:49.111118   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:49.111121   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:49.114348   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:49.611204   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:49.611229   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:49.611238   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:49.611243   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:49.614666   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:50.110595   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:50.110627   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:50.110712   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:50.110741   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:50.114172   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:50.611218   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:50.611243   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:50.611254   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:50.611259   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:50.614323   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:50.614761   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:51.111213   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:51.111233   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:51.111241   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:51.111244   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:51.114272   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:51.610249   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:51.610272   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:51.610280   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:51.610284   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:51.613721   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:52.110987   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:52.111011   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:52.111024   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:52.111029   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:52.115084   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:27:52.610457   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:52.610484   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:52.610496   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:52.610503   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:52.613998   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:53.111009   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:53.111039   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:53.111050   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:53.111055   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:53.114883   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:53.115623   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:53.611121   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:53.611149   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:53.611160   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:53.611166   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:53.614744   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:54.111036   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:54.111063   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.111071   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.111074   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.114369   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:54.610298   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:54.610321   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.610329   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.610334   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.614198   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:54.614633   31878 node_ready.go:49] node "ha-597780-m02" has status "Ready":"True"
	I0814 16:27:54.614651   31878 node_ready.go:38] duration metric: took 17.504589975s for node "ha-597780-m02" to be "Ready" ...
	I0814 16:27:54.614659   31878 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:27:54.614735   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:54.614757   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.614764   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.614770   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.618779   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:54.624568   31878 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.624654   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-28k2m
	I0814 16:27:54.624662   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.624670   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.624674   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.627406   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.628063   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:54.628077   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.628085   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.628088   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.630282   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.630905   31878 pod_ready.go:92] pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:54.630925   31878 pod_ready.go:81] duration metric: took 6.334777ms for pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.630935   31878 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.630993   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-kc84b
	I0814 16:27:54.631003   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.631012   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.631019   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.633363   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.633981   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:54.633995   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.634003   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.634007   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.636060   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.636563   31878 pod_ready.go:92] pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:54.636577   31878 pod_ready.go:81] duration metric: took 5.635779ms for pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.636585   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.636634   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780
	I0814 16:27:54.636642   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.636648   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.636651   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.639135   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.639918   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:54.639940   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.639951   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.639956   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.642170   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.642638   31878 pod_ready.go:92] pod "etcd-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:54.642657   31878 pod_ready.go:81] duration metric: took 6.066171ms for pod "etcd-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.642666   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.642718   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780-m02
	I0814 16:27:54.642730   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.642739   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.642744   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.644933   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.645402   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:54.645416   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.645426   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.645431   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.647687   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.648147   31878 pod_ready.go:92] pod "etcd-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:54.648163   31878 pod_ready.go:81] duration metric: took 5.490635ms for pod "etcd-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.648178   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.810504   31878 request.go:632] Waited for 162.250358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780
	I0814 16:27:54.810602   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780
	I0814 16:27:54.810609   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.810617   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.810626   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.814213   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.011205   31878 request.go:632] Waited for 196.4183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:55.011305   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:55.011315   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.011339   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.011346   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.014514   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.015015   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:55.015033   31878 pod_ready.go:81] duration metric: took 366.849185ms for pod "kube-apiserver-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.015046   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.211178   31878 request.go:632] Waited for 196.066291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m02
	I0814 16:27:55.211243   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m02
	I0814 16:27:55.211249   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.211259   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.211265   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.214793   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.410788   31878 request.go:632] Waited for 195.364944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:55.410852   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:55.410861   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.410874   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.410883   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.413944   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.414382   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:55.414403   31878 pod_ready.go:81] duration metric: took 399.349092ms for pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.414413   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.610403   31878 request.go:632] Waited for 195.913912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780
	I0814 16:27:55.610464   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780
	I0814 16:27:55.610469   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.610477   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.610491   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.614356   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.810463   31878 request.go:632] Waited for 195.275583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:55.810557   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:55.810565   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.810574   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.810580   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.814316   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.814901   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:55.814921   31878 pod_ready.go:81] duration metric: took 400.495173ms for pod "kube-controller-manager-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.814931   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.011028   31878 request.go:632] Waited for 196.039511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m02
	I0814 16:27:56.011114   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m02
	I0814 16:27:56.011125   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.011137   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.011148   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.014324   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:56.211256   31878 request.go:632] Waited for 196.346648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:56.211320   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:56.211343   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.211355   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.211359   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.214448   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:56.215132   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:56.215149   31878 pod_ready.go:81] duration metric: took 400.212519ms for pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.215158   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4q2dq" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.411154   31878 request.go:632] Waited for 195.907518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q2dq
	I0814 16:27:56.411218   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q2dq
	I0814 16:27:56.411226   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.411236   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.411244   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.414675   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:56.610587   31878 request.go:632] Waited for 195.328171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:56.610642   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:56.610647   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.610654   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.610659   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.614199   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:56.614650   31878 pod_ready.go:92] pod "kube-proxy-4q2dq" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:56.614667   31878 pod_ready.go:81] duration metric: took 399.503285ms for pod "kube-proxy-4q2dq" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.614677   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-79txl" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.811033   31878 request.go:632] Waited for 196.298948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-79txl
	I0814 16:27:56.811111   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-79txl
	I0814 16:27:56.811118   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.811126   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.811134   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.814148   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:57.011077   31878 request.go:632] Waited for 196.348399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:57.011130   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:57.011135   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.011143   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.011147   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.014362   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.014957   31878 pod_ready.go:92] pod "kube-proxy-79txl" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:57.014977   31878 pod_ready.go:81] duration metric: took 400.293753ms for pod "kube-proxy-79txl" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.014985   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.211046   31878 request.go:632] Waited for 196.001751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780
	I0814 16:27:57.211104   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780
	I0814 16:27:57.211111   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.211121   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.211129   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.214469   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.410405   31878 request.go:632] Waited for 195.287753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:57.410470   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:57.410475   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.410487   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.410491   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.413675   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.414448   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:57.414471   31878 pod_ready.go:81] duration metric: took 399.477679ms for pod "kube-scheduler-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.414481   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.610903   31878 request.go:632] Waited for 196.365721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m02
	I0814 16:27:57.610978   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m02
	I0814 16:27:57.610990   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.611003   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.611011   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.614595   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.810846   31878 request.go:632] Waited for 195.360436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:57.810900   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:57.810904   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.810911   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.810915   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.814792   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.815436   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:57.815455   31878 pod_ready.go:81] duration metric: took 400.968481ms for pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.815466   31878 pod_ready.go:38] duration metric: took 3.20079656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:27:57.815478   31878 api_server.go:52] waiting for apiserver process to appear ...
	I0814 16:27:57.815532   31878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:27:57.830562   31878 api_server.go:72] duration metric: took 21.042773881s to wait for apiserver process to appear ...
	I0814 16:27:57.830587   31878 api_server.go:88] waiting for apiserver healthz status ...
	I0814 16:27:57.830604   31878 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0814 16:27:57.838936   31878 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0814 16:27:57.839023   31878 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0814 16:27:57.839036   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.839045   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.839050   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.839901   31878 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0814 16:27:57.840004   31878 api_server.go:141] control plane version: v1.31.0
	I0814 16:27:57.840019   31878 api_server.go:131] duration metric: took 9.426657ms to wait for apiserver health ...
	I0814 16:27:57.840026   31878 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 16:27:58.010362   31878 request.go:632] Waited for 170.272025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:58.010442   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:58.010448   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:58.010460   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:58.010467   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:58.014912   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:27:58.020634   31878 system_pods.go:59] 17 kube-system pods found
	I0814 16:27:58.020664   31878 system_pods.go:61] "coredns-6f6b679f8f-28k2m" [ec3725c1-3e21-49b0-9caf-922ef1928ed8] Running
	I0814 16:27:58.020671   31878 system_pods.go:61] "coredns-6f6b679f8f-kc84b" [3a483f17-cab5-4090-abc6-808d84397a8a] Running
	I0814 16:27:58.020678   31878 system_pods.go:61] "etcd-ha-597780" [9af2f660-01fe-499f-902e-4988a5527c5a] Running
	I0814 16:27:58.020684   31878 system_pods.go:61] "etcd-ha-597780-m02" [c811879c-cf46-4c5b-aec2-6fa9aae64d13] Running
	I0814 16:27:58.020688   31878 system_pods.go:61] "kindnet-c8f8r" [b053dfba-820a-416f-9233-ececd7159e1e] Running
	I0814 16:27:58.020691   31878 system_pods.go:61] "kindnet-zm75h" [1e5eabaf-5973-4658-b12b-f7faf67b8af7] Running
	I0814 16:27:58.020694   31878 system_pods.go:61] "kube-apiserver-ha-597780" [8efb614b-9a4f-4029-aba3-e2183fb20627] Running
	I0814 16:27:58.020698   31878 system_pods.go:61] "kube-apiserver-ha-597780-m02" [26d7d4c8-6f40-4217-bf24-f9f94c9f8a79] Running
	I0814 16:27:58.020701   31878 system_pods.go:61] "kube-controller-manager-ha-597780" [ad59b322-ee34-4041-af68-8b5ffcdff9dd] Running
	I0814 16:27:58.020705   31878 system_pods.go:61] "kube-controller-manager-ha-597780-m02" [a25ce1a0-cedb-40cd-ade3-ba63a4b69cd4] Running
	I0814 16:27:58.020709   31878 system_pods.go:61] "kube-proxy-4q2dq" [9e95547c-001c-4942-b160-33e37a389820] Running
	I0814 16:27:58.020715   31878 system_pods.go:61] "kube-proxy-79txl" [ea48ab09-60d5-4133-accc-f3fd69a50c5d] Running
	I0814 16:27:58.020718   31878 system_pods.go:61] "kube-scheduler-ha-597780" [c1576ee1-5aed-4177-b37e-76786ceee1a1] Running
	I0814 16:27:58.020721   31878 system_pods.go:61] "kube-scheduler-ha-597780-m02" [cb250902-8200-423a-8bd3-463aebd7379c] Running
	I0814 16:27:58.020724   31878 system_pods.go:61] "kube-vip-ha-597780" [a5738727-b1a0-4750-9e02-784278225ee4] Running
	I0814 16:27:58.020727   31878 system_pods.go:61] "kube-vip-ha-597780-m02" [c2f92dd8-8248-44a7-bc10-a91546e50eb9] Running
	I0814 16:27:58.020733   31878 system_pods.go:61] "storage-provisioner" [9939439d-cddd-4505-b554-b72f749269fd] Running
	I0814 16:27:58.020738   31878 system_pods.go:74] duration metric: took 180.705381ms to wait for pod list to return data ...
	I0814 16:27:58.020745   31878 default_sa.go:34] waiting for default service account to be created ...
	I0814 16:27:58.211158   31878 request.go:632] Waited for 190.329272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0814 16:27:58.211222   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0814 16:27:58.211227   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:58.211234   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:58.211237   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:58.215157   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:58.215419   31878 default_sa.go:45] found service account: "default"
	I0814 16:27:58.215438   31878 default_sa.go:55] duration metric: took 194.686453ms for default service account to be created ...
	I0814 16:27:58.215452   31878 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 16:27:58.410868   31878 request.go:632] Waited for 195.353496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:58.410924   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:58.410930   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:58.410938   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:58.410941   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:58.415415   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:27:58.420260   31878 system_pods.go:86] 17 kube-system pods found
	I0814 16:27:58.420285   31878 system_pods.go:89] "coredns-6f6b679f8f-28k2m" [ec3725c1-3e21-49b0-9caf-922ef1928ed8] Running
	I0814 16:27:58.420291   31878 system_pods.go:89] "coredns-6f6b679f8f-kc84b" [3a483f17-cab5-4090-abc6-808d84397a8a] Running
	I0814 16:27:58.420295   31878 system_pods.go:89] "etcd-ha-597780" [9af2f660-01fe-499f-902e-4988a5527c5a] Running
	I0814 16:27:58.420299   31878 system_pods.go:89] "etcd-ha-597780-m02" [c811879c-cf46-4c5b-aec2-6fa9aae64d13] Running
	I0814 16:27:58.420303   31878 system_pods.go:89] "kindnet-c8f8r" [b053dfba-820a-416f-9233-ececd7159e1e] Running
	I0814 16:27:58.420307   31878 system_pods.go:89] "kindnet-zm75h" [1e5eabaf-5973-4658-b12b-f7faf67b8af7] Running
	I0814 16:27:58.420311   31878 system_pods.go:89] "kube-apiserver-ha-597780" [8efb614b-9a4f-4029-aba3-e2183fb20627] Running
	I0814 16:27:58.420316   31878 system_pods.go:89] "kube-apiserver-ha-597780-m02" [26d7d4c8-6f40-4217-bf24-f9f94c9f8a79] Running
	I0814 16:27:58.420322   31878 system_pods.go:89] "kube-controller-manager-ha-597780" [ad59b322-ee34-4041-af68-8b5ffcdff9dd] Running
	I0814 16:27:58.420328   31878 system_pods.go:89] "kube-controller-manager-ha-597780-m02" [a25ce1a0-cedb-40cd-ade3-ba63a4b69cd4] Running
	I0814 16:27:58.420334   31878 system_pods.go:89] "kube-proxy-4q2dq" [9e95547c-001c-4942-b160-33e37a389820] Running
	I0814 16:27:58.420349   31878 system_pods.go:89] "kube-proxy-79txl" [ea48ab09-60d5-4133-accc-f3fd69a50c5d] Running
	I0814 16:27:58.420359   31878 system_pods.go:89] "kube-scheduler-ha-597780" [c1576ee1-5aed-4177-b37e-76786ceee1a1] Running
	I0814 16:27:58.420363   31878 system_pods.go:89] "kube-scheduler-ha-597780-m02" [cb250902-8200-423a-8bd3-463aebd7379c] Running
	I0814 16:27:58.420367   31878 system_pods.go:89] "kube-vip-ha-597780" [a5738727-b1a0-4750-9e02-784278225ee4] Running
	I0814 16:27:58.420371   31878 system_pods.go:89] "kube-vip-ha-597780-m02" [c2f92dd8-8248-44a7-bc10-a91546e50eb9] Running
	I0814 16:27:58.420374   31878 system_pods.go:89] "storage-provisioner" [9939439d-cddd-4505-b554-b72f749269fd] Running
	I0814 16:27:58.420379   31878 system_pods.go:126] duration metric: took 204.92215ms to wait for k8s-apps to be running ...
	I0814 16:27:58.420388   31878 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 16:27:58.420440   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:27:58.436102   31878 system_svc.go:56] duration metric: took 15.704365ms WaitForService to wait for kubelet
	I0814 16:27:58.436138   31878 kubeadm.go:582] duration metric: took 21.648350486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:27:58.436161   31878 node_conditions.go:102] verifying NodePressure condition ...
	I0814 16:27:58.610643   31878 request.go:632] Waited for 174.374721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0814 16:27:58.610709   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0814 16:27:58.610716   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:58.610725   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:58.610731   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:58.614510   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:58.615527   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:27:58.615554   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:27:58.615567   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:27:58.615576   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:27:58.615582   31878 node_conditions.go:105] duration metric: took 179.415269ms to run NodePressure ...
	I0814 16:27:58.615598   31878 start.go:241] waiting for startup goroutines ...
	I0814 16:27:58.615631   31878 start.go:255] writing updated cluster config ...
	I0814 16:27:58.617709   31878 out.go:177] 
	I0814 16:27:58.619059   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:27:58.619159   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:27:58.620858   31878 out.go:177] * Starting "ha-597780-m03" control-plane node in "ha-597780" cluster
	I0814 16:27:58.621933   31878 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:27:58.621951   31878 cache.go:56] Caching tarball of preloaded images
	I0814 16:27:58.622043   31878 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:27:58.622054   31878 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:27:58.622132   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:27:58.622289   31878 start.go:360] acquireMachinesLock for ha-597780-m03: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:27:58.622326   31878 start.go:364] duration metric: took 20.192µs to acquireMachinesLock for "ha-597780-m03"
	I0814 16:27:58.622344   31878 start.go:93] Provisioning new machine with config: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:27:58.622430   31878 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0814 16:27:58.623962   31878 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 16:27:58.624082   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:27:58.624116   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:27:58.639175   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38571
	I0814 16:27:58.639655   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:27:58.640088   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:27:58.640108   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:27:58.640444   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:27:58.640606   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetMachineName
	I0814 16:27:58.640754   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:27:58.640907   31878 start.go:159] libmachine.API.Create for "ha-597780" (driver="kvm2")
	I0814 16:27:58.640932   31878 client.go:168] LocalClient.Create starting
	I0814 16:27:58.640963   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 16:27:58.640993   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:27:58.641005   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:27:58.641050   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 16:27:58.641070   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:27:58.641080   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:27:58.641096   31878 main.go:141] libmachine: Running pre-create checks...
	I0814 16:27:58.641104   31878 main.go:141] libmachine: (ha-597780-m03) Calling .PreCreateCheck
	I0814 16:27:58.641289   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetConfigRaw
	I0814 16:27:58.641688   31878 main.go:141] libmachine: Creating machine...
	I0814 16:27:58.641705   31878 main.go:141] libmachine: (ha-597780-m03) Calling .Create
	I0814 16:27:58.641838   31878 main.go:141] libmachine: (ha-597780-m03) Creating KVM machine...
	I0814 16:27:58.643018   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found existing default KVM network
	I0814 16:27:58.643130   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found existing private KVM network mk-ha-597780
	I0814 16:27:58.643232   31878 main.go:141] libmachine: (ha-597780-m03) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03 ...
	I0814 16:27:58.643262   31878 main.go:141] libmachine: (ha-597780-m03) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:27:58.643341   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:27:58.643236   32824 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:27:58.643459   31878 main.go:141] libmachine: (ha-597780-m03) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 16:27:58.873533   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:27:58.873405   32824 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa...
	I0814 16:27:59.244602   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:27:59.244468   32824 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/ha-597780-m03.rawdisk...
	I0814 16:27:59.244636   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Writing magic tar header
	I0814 16:27:59.244655   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Writing SSH key tar header
	I0814 16:27:59.244671   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:27:59.244637   32824 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03 ...
	I0814 16:27:59.244805   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03
	I0814 16:27:59.244831   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03 (perms=drwx------)
	I0814 16:27:59.244839   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 16:27:59.244853   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:27:59.244866   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 16:27:59.244882   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 16:27:59.244893   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins
	I0814 16:27:59.244906   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home
	I0814 16:27:59.244920   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 16:27:59.244928   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Skipping /home - not owner
	I0814 16:27:59.244943   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 16:27:59.244956   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 16:27:59.244971   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 16:27:59.244983   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 16:27:59.244996   31878 main.go:141] libmachine: (ha-597780-m03) Creating domain...
	I0814 16:27:59.245921   31878 main.go:141] libmachine: (ha-597780-m03) define libvirt domain using xml: 
	I0814 16:27:59.245940   31878 main.go:141] libmachine: (ha-597780-m03) <domain type='kvm'>
	I0814 16:27:59.245946   31878 main.go:141] libmachine: (ha-597780-m03)   <name>ha-597780-m03</name>
	I0814 16:27:59.245952   31878 main.go:141] libmachine: (ha-597780-m03)   <memory unit='MiB'>2200</memory>
	I0814 16:27:59.245958   31878 main.go:141] libmachine: (ha-597780-m03)   <vcpu>2</vcpu>
	I0814 16:27:59.245966   31878 main.go:141] libmachine: (ha-597780-m03)   <features>
	I0814 16:27:59.245994   31878 main.go:141] libmachine: (ha-597780-m03)     <acpi/>
	I0814 16:27:59.246017   31878 main.go:141] libmachine: (ha-597780-m03)     <apic/>
	I0814 16:27:59.246026   31878 main.go:141] libmachine: (ha-597780-m03)     <pae/>
	I0814 16:27:59.246034   31878 main.go:141] libmachine: (ha-597780-m03)     
	I0814 16:27:59.246046   31878 main.go:141] libmachine: (ha-597780-m03)   </features>
	I0814 16:27:59.246061   31878 main.go:141] libmachine: (ha-597780-m03)   <cpu mode='host-passthrough'>
	I0814 16:27:59.246072   31878 main.go:141] libmachine: (ha-597780-m03)   
	I0814 16:27:59.246083   31878 main.go:141] libmachine: (ha-597780-m03)   </cpu>
	I0814 16:27:59.246117   31878 main.go:141] libmachine: (ha-597780-m03)   <os>
	I0814 16:27:59.246141   31878 main.go:141] libmachine: (ha-597780-m03)     <type>hvm</type>
	I0814 16:27:59.246154   31878 main.go:141] libmachine: (ha-597780-m03)     <boot dev='cdrom'/>
	I0814 16:27:59.246170   31878 main.go:141] libmachine: (ha-597780-m03)     <boot dev='hd'/>
	I0814 16:27:59.246179   31878 main.go:141] libmachine: (ha-597780-m03)     <bootmenu enable='no'/>
	I0814 16:27:59.246186   31878 main.go:141] libmachine: (ha-597780-m03)   </os>
	I0814 16:27:59.246191   31878 main.go:141] libmachine: (ha-597780-m03)   <devices>
	I0814 16:27:59.246198   31878 main.go:141] libmachine: (ha-597780-m03)     <disk type='file' device='cdrom'>
	I0814 16:27:59.246207   31878 main.go:141] libmachine: (ha-597780-m03)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/boot2docker.iso'/>
	I0814 16:27:59.246214   31878 main.go:141] libmachine: (ha-597780-m03)       <target dev='hdc' bus='scsi'/>
	I0814 16:27:59.246225   31878 main.go:141] libmachine: (ha-597780-m03)       <readonly/>
	I0814 16:27:59.246238   31878 main.go:141] libmachine: (ha-597780-m03)     </disk>
	I0814 16:27:59.246251   31878 main.go:141] libmachine: (ha-597780-m03)     <disk type='file' device='disk'>
	I0814 16:27:59.246269   31878 main.go:141] libmachine: (ha-597780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 16:27:59.246299   31878 main.go:141] libmachine: (ha-597780-m03)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/ha-597780-m03.rawdisk'/>
	I0814 16:27:59.246318   31878 main.go:141] libmachine: (ha-597780-m03)       <target dev='hda' bus='virtio'/>
	I0814 16:27:59.246331   31878 main.go:141] libmachine: (ha-597780-m03)     </disk>
	I0814 16:27:59.246340   31878 main.go:141] libmachine: (ha-597780-m03)     <interface type='network'>
	I0814 16:27:59.246354   31878 main.go:141] libmachine: (ha-597780-m03)       <source network='mk-ha-597780'/>
	I0814 16:27:59.246366   31878 main.go:141] libmachine: (ha-597780-m03)       <model type='virtio'/>
	I0814 16:27:59.246378   31878 main.go:141] libmachine: (ha-597780-m03)     </interface>
	I0814 16:27:59.246393   31878 main.go:141] libmachine: (ha-597780-m03)     <interface type='network'>
	I0814 16:27:59.246413   31878 main.go:141] libmachine: (ha-597780-m03)       <source network='default'/>
	I0814 16:27:59.246424   31878 main.go:141] libmachine: (ha-597780-m03)       <model type='virtio'/>
	I0814 16:27:59.246434   31878 main.go:141] libmachine: (ha-597780-m03)     </interface>
	I0814 16:27:59.246445   31878 main.go:141] libmachine: (ha-597780-m03)     <serial type='pty'>
	I0814 16:27:59.246457   31878 main.go:141] libmachine: (ha-597780-m03)       <target port='0'/>
	I0814 16:27:59.246471   31878 main.go:141] libmachine: (ha-597780-m03)     </serial>
	I0814 16:27:59.246485   31878 main.go:141] libmachine: (ha-597780-m03)     <console type='pty'>
	I0814 16:27:59.246498   31878 main.go:141] libmachine: (ha-597780-m03)       <target type='serial' port='0'/>
	I0814 16:27:59.246507   31878 main.go:141] libmachine: (ha-597780-m03)     </console>
	I0814 16:27:59.246517   31878 main.go:141] libmachine: (ha-597780-m03)     <rng model='virtio'>
	I0814 16:27:59.246535   31878 main.go:141] libmachine: (ha-597780-m03)       <backend model='random'>/dev/random</backend>
	I0814 16:27:59.246553   31878 main.go:141] libmachine: (ha-597780-m03)     </rng>
	I0814 16:27:59.246568   31878 main.go:141] libmachine: (ha-597780-m03)     
	I0814 16:27:59.246585   31878 main.go:141] libmachine: (ha-597780-m03)     
	I0814 16:27:59.246596   31878 main.go:141] libmachine: (ha-597780-m03)   </devices>
	I0814 16:27:59.246604   31878 main.go:141] libmachine: (ha-597780-m03) </domain>
	I0814 16:27:59.246618   31878 main.go:141] libmachine: (ha-597780-m03) 
	I0814 16:27:59.253221   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:ab:73:8c in network default
	I0814 16:27:59.253785   31878 main.go:141] libmachine: (ha-597780-m03) Ensuring networks are active...
	I0814 16:27:59.253807   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:27:59.254373   31878 main.go:141] libmachine: (ha-597780-m03) Ensuring network default is active
	I0814 16:27:59.254656   31878 main.go:141] libmachine: (ha-597780-m03) Ensuring network mk-ha-597780 is active
	I0814 16:27:59.254932   31878 main.go:141] libmachine: (ha-597780-m03) Getting domain xml...
	I0814 16:27:59.255562   31878 main.go:141] libmachine: (ha-597780-m03) Creating domain...
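For reference, the XML printed above is what libmachine hands to libvirt at the "Creating domain..." step. A minimal sketch of defining and starting such a domain with the libvirt Go bindings (libvirt.org/go/libvirt); the file name and connection URI are assumptions for illustration, not minikube's code:

// Minimal sketch: define and start a libvirt domain from an XML definition
// like the one printed above.
package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xmlDef, err := os.ReadFile("ha-597780-m03.xml") // domain XML as logged above
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xmlDef)) // persist the definition
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // start it ("Creating domain...")
		panic(err)
	}
	fmt.Println("domain started")
}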
	I0814 16:28:00.490190   31878 main.go:141] libmachine: (ha-597780-m03) Waiting to get IP...
	I0814 16:28:00.491016   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:00.491434   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:00.491492   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:00.491433   32824 retry.go:31] will retry after 215.668377ms: waiting for machine to come up
	I0814 16:28:00.708783   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:00.709192   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:00.709219   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:00.709143   32824 retry.go:31] will retry after 287.449412ms: waiting for machine to come up
	I0814 16:28:00.998673   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:00.999161   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:00.999183   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:00.999112   32824 retry.go:31] will retry after 410.594458ms: waiting for machine to come up
	I0814 16:28:01.411675   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:01.412228   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:01.412254   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:01.412208   32824 retry.go:31] will retry after 440.346851ms: waiting for machine to come up
	I0814 16:28:01.853631   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:01.854118   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:01.854147   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:01.854057   32824 retry.go:31] will retry after 736.037125ms: waiting for machine to come up
	I0814 16:28:02.591534   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:02.591947   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:02.591971   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:02.591908   32824 retry.go:31] will retry after 760.455251ms: waiting for machine to come up
	I0814 16:28:03.353918   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:03.354326   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:03.354353   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:03.354291   32824 retry.go:31] will retry after 734.384806ms: waiting for machine to come up
	I0814 16:28:04.090570   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:04.091017   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:04.091046   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:04.090964   32824 retry.go:31] will retry after 990.16899ms: waiting for machine to come up
	I0814 16:28:05.083166   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:05.083604   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:05.083628   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:05.083577   32824 retry.go:31] will retry after 1.417341163s: waiting for machine to come up
	I0814 16:28:06.502131   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:06.502609   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:06.502655   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:06.502547   32824 retry.go:31] will retry after 2.204940468s: waiting for machine to come up
	I0814 16:28:08.709498   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:08.710102   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:08.710133   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:08.710046   32824 retry.go:31] will retry after 2.739628932s: waiting for machine to come up
	I0814 16:28:11.452942   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:11.453463   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:11.453492   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:11.453418   32824 retry.go:31] will retry after 2.200619257s: waiting for machine to come up
	I0814 16:28:13.655241   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:13.655869   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:13.655894   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:13.655818   32824 retry.go:31] will retry after 3.238883502s: waiting for machine to come up
	I0814 16:28:16.896282   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:16.896766   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:16.896793   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:16.896706   32824 retry.go:31] will retry after 3.559583358s: waiting for machine to come up
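The repeated "will retry after ..." lines above are a jittered, growing poll of the network's DHCP leases until the new MAC address gets an IP. A generic stand-alone version of that retry loop (names and timings are illustrative, not minikube's retry.go):

// Rough sketch: poll a lookup function with growing, slightly randomized
// delays until it succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitFor(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		v, err := lookup()
		if err == nil {
			return v, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add jitter and grow the delay, like the increasing waits in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	_, err := waitFor(func() (string, error) {
		return "", errors.New("unable to find current IP address")
	}, 2*time.Second)
	fmt.Println(err)
}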
	I0814 16:28:20.457259   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.457783   31878 main.go:141] libmachine: (ha-597780-m03) Found IP for machine: 192.168.39.167
	I0814 16:28:20.457809   31878 main.go:141] libmachine: (ha-597780-m03) Reserving static IP address...
	I0814 16:28:20.457822   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has current primary IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.458181   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find host DHCP lease matching {name: "ha-597780-m03", mac: "52:54:00:e0:61:b4", ip: "192.168.39.167"} in network mk-ha-597780
	I0814 16:28:20.530929   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Getting to WaitForSSH function...
	I0814 16:28:20.530964   31878 main.go:141] libmachine: (ha-597780-m03) Reserved static IP address: 192.168.39.167
	I0814 16:28:20.530978   31878 main.go:141] libmachine: (ha-597780-m03) Waiting for SSH to be available...
	I0814 16:28:20.533511   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.533911   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:20.533941   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.534112   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Using SSH client type: external
	I0814 16:28:20.534137   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa (-rw-------)
	I0814 16:28:20.534156   31878 main.go:141] libmachine: (ha-597780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:28:20.534166   31878 main.go:141] libmachine: (ha-597780-m03) DBG | About to run SSH command:
	I0814 16:28:20.534179   31878 main.go:141] libmachine: (ha-597780-m03) DBG | exit 0
	I0814 16:28:20.663661   31878 main.go:141] libmachine: (ha-597780-m03) DBG | SSH cmd err, output: <nil>: 
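The block above probes the new VM with an external `ssh ... "exit 0"` until sshd answers. A compact stand-alone version of that probe; the address and key path are taken from the log, while the helper itself is illustrative, not libmachine's WaitForSSH:

// Hedged sketch: run `exit 0` over ssh with host-key checking disabled and
// give up after a fixed number of attempts.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+addr,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.167", key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}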
	I0814 16:28:20.663939   31878 main.go:141] libmachine: (ha-597780-m03) KVM machine creation complete!
	I0814 16:28:20.664255   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetConfigRaw
	I0814 16:28:20.664837   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:20.665037   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:20.665225   31878 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 16:28:20.665238   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:28:20.666554   31878 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 16:28:20.666570   31878 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 16:28:20.666578   31878 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 16:28:20.666586   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:20.668811   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.669189   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:20.669216   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.669346   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:20.669486   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.669631   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.669762   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:20.669905   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:20.670091   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:20.670114   31878 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 16:28:20.778468   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:28:20.778492   31878 main.go:141] libmachine: Detecting the provisioner...
	I0814 16:28:20.778502   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:20.781208   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.781571   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:20.781601   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.781782   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:20.781968   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.782124   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.782244   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:20.782365   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:20.782530   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:20.782540   31878 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 16:28:20.892216   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 16:28:20.892280   31878 main.go:141] libmachine: found compatible host: buildroot
	I0814 16:28:20.892287   31878 main.go:141] libmachine: Provisioning with buildroot...
	I0814 16:28:20.892294   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetMachineName
	I0814 16:28:20.892572   31878 buildroot.go:166] provisioning hostname "ha-597780-m03"
	I0814 16:28:20.892600   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetMachineName
	I0814 16:28:20.892815   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:20.895596   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.896117   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:20.896146   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.896273   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:20.896450   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.896615   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.896854   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:20.897092   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:20.897267   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:20.897283   31878 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-597780-m03 && echo "ha-597780-m03" | sudo tee /etc/hostname
	I0814 16:28:21.020119   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780-m03
	
	I0814 16:28:21.020147   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.022784   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.023132   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.023152   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.023349   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.023553   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.023733   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.023897   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.024059   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:21.024253   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:21.024277   31878 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-597780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-597780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-597780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:28:21.143314   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:28:21.143359   31878 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:28:21.143375   31878 buildroot.go:174] setting up certificates
	I0814 16:28:21.143389   31878 provision.go:84] configureAuth start
	I0814 16:28:21.143413   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetMachineName
	I0814 16:28:21.143713   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:28:21.146530   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.146932   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.146971   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.147100   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.149060   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.149339   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.149369   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.149498   31878 provision.go:143] copyHostCerts
	I0814 16:28:21.149522   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:28:21.149556   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 16:28:21.149568   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:28:21.149667   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:28:21.149760   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:28:21.149788   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 16:28:21.149799   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:28:21.149836   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:28:21.149897   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:28:21.149921   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 16:28:21.149929   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:28:21.149964   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:28:21.150287   31878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.ha-597780-m03 san=[127.0.0.1 192.168.39.167 ha-597780-m03 localhost minikube]
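The "generating server cert" step above issues a CA-signed server certificate whose SANs cover the node IP, hostnames and loopback. A hedged sketch of producing such a certificate with Go's crypto/x509; paths, organization and validity period are assumptions, and minikube's own crypto.go differs in detail:

// Sketch: sign a server certificate with an existing CA, using the SANs
// listed in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem") // CA certificate (illustrative path)
	if err != nil {
		panic(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem") // CA private key (illustrative path)
	if err != nil {
		panic(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA/PKCS#1 CA key
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-597780-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as in the log: san=[127.0.0.1 192.168.39.167 ha-597780-m03 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.167")},
		DNSNames:    []string{"ha-597780-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}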
	I0814 16:28:21.257447   31878 provision.go:177] copyRemoteCerts
	I0814 16:28:21.257509   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:28:21.257542   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.260087   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.260489   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.260516   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.260686   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.260849   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.261017   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.261147   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:28:21.345036   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 16:28:21.345125   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:28:21.366773   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 16:28:21.366842   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 16:28:21.388396   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 16:28:21.388484   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:28:21.409418   31878 provision.go:87] duration metric: took 266.016615ms to configureAuth
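copyRemoteCerts above pushes the CA and server certs into /etc/docker on the guest over the SSH session (the "scp ... -->" lines). Since /etc/docker is root-owned, a stand-alone equivalent has to pipe each file through sudo on the remote side. One illustrative way, with the helper name an assumption (minikube does this inside its own SSH runner rather than via the ssh CLI):

// Sketch: stream a local file to a root-owned path on the guest via
// `ssh ... "sudo tee <dst>"`.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func pushFile(key, addr, src, dst string) error {
	f, err := os.Open(src)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", key, "docker@"+addr,
		fmt.Sprintf("sudo mkdir -p /etc/docker && sudo tee %s >/dev/null", dst))
	cmd.Stdin = f
	return cmd.Run()
}

func main() {
	key := "/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa"
	base := "/home/jenkins/minikube-integration/19446-13977/.minikube"
	copies := map[string]string{
		base + "/certs/ca.pem":            "/etc/docker/ca.pem",
		base + "/machines/server.pem":     "/etc/docker/server.pem",
		base + "/machines/server-key.pem": "/etc/docker/server-key.pem",
	}
	for src, dst := range copies {
		if err := pushFile(key, "192.168.39.167", src, dst); err != nil {
			fmt.Printf("copy %s failed: %v\n", src, err)
		}
	}
}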
	I0814 16:28:21.409449   31878 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:28:21.409684   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:28:21.409765   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.412416   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.412835   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.412861   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.413061   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.413256   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.413408   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.413525   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.413697   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:21.413877   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:21.413892   31878 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:28:21.677901   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:28:21.677938   31878 main.go:141] libmachine: Checking connection to Docker...
	I0814 16:28:21.677946   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetURL
	I0814 16:28:21.679192   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Using libvirt version 6000000
	I0814 16:28:21.681181   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.681521   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.681543   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.681658   31878 main.go:141] libmachine: Docker is up and running!
	I0814 16:28:21.681672   31878 main.go:141] libmachine: Reticulating splines...
	I0814 16:28:21.681680   31878 client.go:171] duration metric: took 23.040737276s to LocalClient.Create
	I0814 16:28:21.681707   31878 start.go:167] duration metric: took 23.040797467s to libmachine.API.Create "ha-597780"
	I0814 16:28:21.681718   31878 start.go:293] postStartSetup for "ha-597780-m03" (driver="kvm2")
	I0814 16:28:21.681731   31878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:28:21.681761   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.681979   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:28:21.682003   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.684060   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.684330   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.684354   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.684492   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.684684   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.684817   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.684951   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:28:21.773408   31878 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:28:21.777349   31878 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:28:21.777370   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:28:21.777444   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:28:21.777537   31878 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 16:28:21.777548   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 16:28:21.777653   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 16:28:21.786579   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:28:21.808597   31878 start.go:296] duration metric: took 126.866868ms for postStartSetup
	I0814 16:28:21.808644   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetConfigRaw
	I0814 16:28:21.809206   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:28:21.811918   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.812306   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.812335   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.812655   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:28:21.812852   31878 start.go:128] duration metric: took 23.190411902s to createHost
	I0814 16:28:21.812871   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.815277   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.815654   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.815674   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.815874   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.816060   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.816196   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.816308   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.816442   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:21.816653   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:21.816667   31878 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 16:28:21.931715   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723652901.892288502
	
	I0814 16:28:21.931737   31878 fix.go:216] guest clock: 1723652901.892288502
	I0814 16:28:21.931744   31878 fix.go:229] Guest: 2024-08-14 16:28:21.892288502 +0000 UTC Remote: 2024-08-14 16:28:21.812861976 +0000 UTC m=+185.295146227 (delta=79.426526ms)
	I0814 16:28:21.931758   31878 fix.go:200] guest clock delta is within tolerance: 79.426526ms
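The guest/host clock comparison above parses the output of `date +%s.%N` from the VM and accepts the machine if the skew is small. A tiny sketch of that arithmetic; the tolerance value is an assumption for illustration:

// Sketch: turn the guest's epoch.nanoseconds string into a time.Time and
// compare it with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	out := "1723652901.892288502" // guest output from the log above
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}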
	I0814 16:28:21.931763   31878 start.go:83] releasing machines lock for "ha-597780-m03", held for 23.309428864s
	I0814 16:28:21.931778   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.932009   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:28:21.934743   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.935285   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.935353   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.937363   31878 out.go:177] * Found network options:
	I0814 16:28:21.938795   31878 out.go:177]   - NO_PROXY=192.168.39.4,192.168.39.225
	W0814 16:28:21.939945   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	W0814 16:28:21.939967   31878 proxy.go:119] fail to check proxy env: Error ip not in block
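A warning like the two above is emitted when an address from the NO_PROXY list does not contain or cover the node being checked, i.e. the IP is not inside any listed CIDR block. A self-contained sketch of that kind of containment check; the helper name and behaviour are illustrative, not minikube's proxy.go:

// Sketch: report whether an IP is already covered by a NO_PROXY entry
// (exact match or CIDR block).
package main

import (
	"fmt"
	"net"
	"strings"
)

func ipInNoProxy(ip string, noProxy string) bool {
	addr := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if entry == ip {
			return true
		}
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	// The new node's IP is not yet listed, matching the "ip not in block" case above.
	fmt.Println(ipInNoProxy("192.168.39.167", "192.168.39.4,192.168.39.225"))
}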
	I0814 16:28:21.939980   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.940538   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.940699   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.940787   31878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:28:21.940823   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	W0814 16:28:21.940913   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	W0814 16:28:21.940936   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	I0814 16:28:21.941003   31878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:28:21.941025   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.943600   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.943862   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.944046   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.944071   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.944194   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.944314   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.944334   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.944368   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.944506   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.944549   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.944706   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.944713   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:28:21.944871   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.945030   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:28:22.182608   31878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:28:22.188514   31878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:28:22.188591   31878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:28:22.204201   31878 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 16:28:22.204225   31878 start.go:495] detecting cgroup driver to use...
	I0814 16:28:22.204293   31878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:28:22.221315   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:28:22.237458   31878 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:28:22.237520   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:28:22.251459   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:28:22.264746   31878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:28:22.381397   31878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:28:22.531017   31878 docker.go:233] disabling docker service ...
	I0814 16:28:22.531088   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:28:22.544585   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:28:22.558165   31878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:28:22.696824   31878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:28:22.807601   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:28:22.821653   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:28:22.839262   31878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:28:22.839342   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.850133   31878 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:28:22.850191   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.859788   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.869995   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.879459   31878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:28:22.889428   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.899777   31878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.917167   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
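The sed commands above rewrite CRI-O's drop-in configuration in place. Reconstructed from those commands (not a verbatim capture, and with the drop-in's section headers omitted), the relevant keys in /etc/crio/crio.conf.d/02-crio.conf end up roughly as:

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]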
	I0814 16:28:22.927123   31878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:28:22.936357   31878 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 16:28:22.936408   31878 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 16:28:22.950536   31878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:28:22.959627   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:28:23.072935   31878 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 16:28:23.207339   31878 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:28:23.207426   31878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:28:23.211816   31878 start.go:563] Will wait 60s for crictl version
	I0814 16:28:23.211878   31878 ssh_runner.go:195] Run: which crictl
	I0814 16:28:23.215943   31878 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:28:23.254626   31878 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
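Once CRI-O is restarted, minikube waits for the runtime socket and then asks crictl for the version shown above. A small stand-alone equivalent of those two checks; helper names are illustrative, not minikube's start.go:

// Sketch: wait for the CRI-O socket to appear, then run `crictl version`.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s did not appear within %v", path, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}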
	I0814 16:28:23.254707   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:28:23.284346   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:28:23.312383   31878 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:28:23.313724   31878 out.go:177]   - env NO_PROXY=192.168.39.4
	I0814 16:28:23.315140   31878 out.go:177]   - env NO_PROXY=192.168.39.4,192.168.39.225
	I0814 16:28:23.316419   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:28:23.319204   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:23.319704   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:23.319731   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:23.319984   31878 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:28:23.323956   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:28:23.336792   31878 mustload.go:65] Loading cluster: ha-597780
	I0814 16:28:23.337035   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:28:23.337414   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:28:23.337458   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:28:23.352506   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0814 16:28:23.353465   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:28:23.353923   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:28:23.353941   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:28:23.354257   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:28:23.354437   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:28:23.356036   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:28:23.356313   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:28:23.356344   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:28:23.370230   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32981
	I0814 16:28:23.370708   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:28:23.371061   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:28:23.371081   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:28:23.371375   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:28:23.371534   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:28:23.371698   31878 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780 for IP: 192.168.39.167
	I0814 16:28:23.371709   31878 certs.go:194] generating shared ca certs ...
	I0814 16:28:23.371721   31878 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:28:23.371843   31878 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:28:23.371899   31878 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:28:23.371909   31878 certs.go:256] generating profile certs ...
	I0814 16:28:23.371980   31878 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key
	I0814 16:28:23.372005   31878 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.c004033e
	I0814 16:28:23.372018   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.c004033e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.225 192.168.39.167 192.168.39.254]
	I0814 16:28:23.531346   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.c004033e ...
	I0814 16:28:23.531375   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.c004033e: {Name:mkf610138317689d6471fb37acfe2a421465e4a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:28:23.531526   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.c004033e ...
	I0814 16:28:23.531538   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.c004033e: {Name:mka58bc6a325725646d19898fe4916d2053e8c88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:28:23.531604   31878 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.c004033e -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt
	I0814 16:28:23.531741   31878 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.c004033e -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key
	I0814 16:28:23.531858   31878 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key
	I0814 16:28:23.531872   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 16:28:23.531884   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 16:28:23.531898   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 16:28:23.531912   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 16:28:23.531924   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 16:28:23.531936   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 16:28:23.531947   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 16:28:23.531960   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 16:28:23.532007   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 16:28:23.532033   31878 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 16:28:23.532041   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:28:23.532062   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:28:23.532082   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:28:23.532101   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:28:23.532136   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:28:23.532160   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:28:23.532173   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 16:28:23.532185   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 16:28:23.532215   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:28:23.534797   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:28:23.535214   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:28:23.535240   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:28:23.535459   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:28:23.535634   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:28:23.535781   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:28:23.535876   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:28:23.607759   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0814 16:28:23.613426   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0814 16:28:23.624323   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0814 16:28:23.627973   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0814 16:28:23.637728   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0814 16:28:23.642181   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0814 16:28:23.652002   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0814 16:28:23.655744   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0814 16:28:23.665853   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0814 16:28:23.669519   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0814 16:28:23.679142   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0814 16:28:23.682748   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0814 16:28:23.692077   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:28:23.715909   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:28:23.738090   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:28:23.762049   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:28:23.784577   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0814 16:28:23.806319   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:28:23.829985   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:28:23.853752   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:28:23.876043   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:28:23.899547   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 16:28:23.922868   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 16:28:23.946022   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0814 16:28:23.960834   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0814 16:28:23.976274   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0814 16:28:23.991130   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0814 16:28:24.006609   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0814 16:28:24.021348   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0814 16:28:24.036293   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0814 16:28:24.051655   31878 ssh_runner.go:195] Run: openssl version
	I0814 16:28:24.056975   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 16:28:24.067045   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 16:28:24.071023   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 16:28:24.071070   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 16:28:24.076626   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 16:28:24.086718   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:28:24.096876   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:28:24.102009   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:28:24.102074   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:28:24.107679   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:28:24.118007   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 16:28:24.128176   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 16:28:24.132159   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 16:28:24.132226   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 16:28:24.137618   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 16:28:24.148077   31878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:28:24.151750   31878 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:28:24.151810   31878 kubeadm.go:934] updating node {m03 192.168.39.167 8443 v1.31.0 crio true true} ...
	I0814 16:28:24.151902   31878 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-597780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:28:24.151936   31878 kube-vip.go:115] generating kube-vip config ...
	I0814 16:28:24.151978   31878 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 16:28:24.168492   31878 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 16:28:24.168553   31878 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0814 16:28:24.168622   31878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:28:24.177752   31878 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0814 16:28:24.177817   31878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0814 16:28:24.186720   31878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0814 16:28:24.186743   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0814 16:28:24.186752   31878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0814 16:28:24.186771   31878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0814 16:28:24.186789   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0814 16:28:24.186800   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:28:24.186819   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0814 16:28:24.186849   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0814 16:28:24.203631   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0814 16:28:24.203708   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0814 16:28:24.203725   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0814 16:28:24.203732   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0814 16:28:24.203780   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0814 16:28:24.203810   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0814 16:28:24.212736   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0814 16:28:24.212771   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0814 16:28:25.014583   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0814 16:28:25.025211   31878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 16:28:25.042467   31878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:28:25.059345   31878 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0814 16:28:25.074711   31878 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 16:28:25.078397   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:28:25.090138   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:28:25.212030   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:28:25.232302   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:28:25.232784   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:28:25.232837   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:28:25.250540   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I0814 16:28:25.251572   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:28:25.252132   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:28:25.252153   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:28:25.252499   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:28:25.252708   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:28:25.252852   31878 start.go:317] joinCluster: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:28:25.253023   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0814 16:28:25.253044   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:28:25.256193   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:28:25.256616   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:28:25.256642   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:28:25.256850   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:28:25.257048   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:28:25.257195   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:28:25.257339   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:28:25.405413   31878 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:28:25.405462   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qz6at2.4om312wgxwib85w4 --discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-597780-m03 --control-plane --apiserver-advertise-address=192.168.39.167 --apiserver-bind-port=8443"
	I0814 16:28:48.475052   31878 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qz6at2.4om312wgxwib85w4 --discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-597780-m03 --control-plane --apiserver-advertise-address=192.168.39.167 --apiserver-bind-port=8443": (23.069546291s)
	I0814 16:28:48.475092   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0814 16:28:49.048015   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-597780-m03 minikube.k8s.io/updated_at=2024_08_14T16_28_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=ha-597780 minikube.k8s.io/primary=false
	I0814 16:28:49.172851   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-597780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0814 16:28:49.287202   31878 start.go:319] duration metric: took 24.034345482s to joinCluster
	I0814 16:28:49.287280   31878 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:28:49.287645   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:28:49.288915   31878 out.go:177] * Verifying Kubernetes components...
	I0814 16:28:49.290054   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:28:49.507735   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:28:49.566643   31878 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:28:49.566988   31878 kapi.go:59] client config for ha-597780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key", CAFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f170c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0814 16:28:49.567088   31878 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0814 16:28:49.567378   31878 node_ready.go:35] waiting up to 6m0s for node "ha-597780-m03" to be "Ready" ...
	I0814 16:28:49.567483   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:49.567496   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:49.567508   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:49.567514   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:49.570928   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:50.068602   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:50.068628   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:50.068641   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:50.068679   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:50.072059   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:50.568564   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:50.568592   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:50.568601   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:50.568606   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:50.572723   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:28:51.067584   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:51.067611   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:51.067631   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:51.067638   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:51.071023   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:51.568228   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:51.568250   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:51.568261   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:51.568266   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:51.571548   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:51.572032   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:28:52.067883   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:52.067905   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:52.067915   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:52.067920   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:52.070772   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:28:52.568598   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:52.568626   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:52.568637   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:52.568644   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:52.572038   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:53.067960   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:53.067986   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:53.067996   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:53.068001   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:53.071266   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:53.568203   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:53.568225   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:53.568233   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:53.568239   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:53.571473   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:53.572184   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:28:54.068455   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:54.068475   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:54.068488   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:54.068491   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:54.071339   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:28:54.567828   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:54.567856   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:54.567866   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:54.567874   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:54.574563   31878 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0814 16:28:55.068622   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:55.068647   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:55.068658   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:55.068663   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:55.071914   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:55.568556   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:55.568576   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:55.568582   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:55.568587   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:55.571804   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:55.572292   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:28:56.068546   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:56.068568   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:56.068578   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:56.068583   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:56.072131   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:56.568251   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:56.568281   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:56.568291   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:56.568298   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:56.571731   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:57.068349   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:57.068372   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:57.068379   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:57.068385   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:57.070869   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:28:57.568564   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:57.568584   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:57.568592   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:57.568598   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:57.571770   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:57.572608   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:28:58.068098   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:58.068123   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:58.068131   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:58.068136   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:58.071240   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:58.568369   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:58.568397   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:58.568407   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:58.568412   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:58.571837   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:59.068558   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:59.068593   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:59.068602   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:59.068608   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:59.071513   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:28:59.568430   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:59.568470   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:59.568477   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:59.568481   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:59.571759   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:00.068605   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:00.068627   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:00.068639   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:00.068647   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:00.072033   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:00.072740   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:29:00.568425   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:00.568450   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:00.568458   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:00.568465   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:00.572298   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:01.068204   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:01.068238   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:01.068250   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:01.068258   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:01.071244   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:01.568161   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:01.568188   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:01.568199   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:01.568205   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:01.571637   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:02.068242   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:02.068267   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:02.068276   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:02.068282   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:02.071343   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:02.568379   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:02.568403   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:02.568412   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:02.568417   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:02.571355   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:02.571886   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:29:03.067646   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:03.067667   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:03.067674   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:03.067679   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:03.070708   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:03.568581   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:03.568608   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:03.568619   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:03.568626   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:03.571505   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:04.067593   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:04.067634   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:04.067652   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:04.067656   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:04.070803   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:04.568233   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:04.568268   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:04.568278   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:04.568306   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:04.573076   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:29:04.573726   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:29:05.068223   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:05.068249   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:05.068263   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:05.068271   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:05.070929   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:05.568472   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:05.568504   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:05.568517   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:05.568524   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:05.571869   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:06.068545   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:06.068566   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:06.068574   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:06.068577   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:06.071859   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:06.568006   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:06.568033   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:06.568045   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:06.568051   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:06.571453   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.067849   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:07.067924   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.067950   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.067964   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.071472   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.072434   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:29:07.568575   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:07.568599   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.568608   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.568614   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.571596   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.572338   31878 node_ready.go:49] node "ha-597780-m03" has status "Ready":"True"
	I0814 16:29:07.572356   31878 node_ready.go:38] duration metric: took 18.004962293s for node "ha-597780-m03" to be "Ready" ...
	I0814 16:29:07.572364   31878 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:29:07.572424   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:07.572433   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.572440   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.572444   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.577495   31878 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0814 16:29:07.585157   31878 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.585268   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-28k2m
	I0814 16:29:07.585286   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.585296   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.585303   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.588480   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.589251   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:07.589270   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.589281   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.589288   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.592447   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.593017   31878 pod_ready.go:92] pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.593040   31878 pod_ready.go:81] duration metric: took 7.850765ms for pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.593053   31878 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.593142   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-kc84b
	I0814 16:29:07.593152   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.593162   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.593168   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.596200   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.596895   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:07.596909   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.596916   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.596921   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.599174   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.599650   31878 pod_ready.go:92] pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.599670   31878 pod_ready.go:81] duration metric: took 6.609573ms for pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.599682   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.599747   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780
	I0814 16:29:07.599757   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.599767   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.599774   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.602031   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.602550   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:07.602566   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.602576   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.602582   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.605537   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.606107   31878 pod_ready.go:92] pod "etcd-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.606124   31878 pod_ready.go:81] duration metric: took 6.434528ms for pod "etcd-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.606132   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.606177   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780-m02
	I0814 16:29:07.606184   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.606191   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.606197   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.608992   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.609493   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:07.609506   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.609513   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.609517   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.612196   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.612719   31878 pod_ready.go:92] pod "etcd-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.612739   31878 pod_ready.go:81] duration metric: took 6.600607ms for pod "etcd-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.612751   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.769170   31878 request.go:632] Waited for 156.349582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780-m03
	I0814 16:29:07.769255   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780-m03
	I0814 16:29:07.769265   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.769276   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.769286   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.772462   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.969360   31878 request.go:632] Waited for 196.218172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:07.969411   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:07.969416   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.969423   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.969428   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.972339   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.972901   31878 pod_ready.go:92] pod "etcd-ha-597780-m03" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.972922   31878 pod_ready.go:81] duration metric: took 360.158993ms for pod "etcd-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.972943   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.169015   31878 request.go:632] Waited for 196.006672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780
	I0814 16:29:08.169109   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780
	I0814 16:29:08.169117   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.169128   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.169138   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.172166   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:08.369122   31878 request.go:632] Waited for 196.24583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:08.369190   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:08.369197   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.369207   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.369213   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.372255   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:08.372960   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:08.372977   31878 pod_ready.go:81] duration metric: took 400.026545ms for pod "kube-apiserver-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.372986   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.569453   31878 request.go:632] Waited for 196.397043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m02
	I0814 16:29:08.569511   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m02
	I0814 16:29:08.569516   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.569524   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.569528   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.572332   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:08.768650   31878 request.go:632] Waited for 195.20197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:08.768709   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:08.768716   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.768727   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.768737   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.771774   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:08.772261   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:08.772278   31878 pod_ready.go:81] duration metric: took 399.284844ms for pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.772288   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.968742   31878 request.go:632] Waited for 196.381006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m03
	I0814 16:29:08.968841   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m03
	I0814 16:29:08.968852   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.968864   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.968875   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.972046   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:09.169224   31878 request.go:632] Waited for 196.392353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:09.169290   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:09.169297   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.169307   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.169344   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.172100   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:09.172767   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780-m03" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:09.172785   31878 pod_ready.go:81] duration metric: took 400.49136ms for pod "kube-apiserver-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.172797   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.368840   31878 request.go:632] Waited for 195.910201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780
	I0814 16:29:09.368910   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780
	I0814 16:29:09.368918   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.368928   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.368939   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.372517   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:09.568926   31878 request.go:632] Waited for 195.394269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:09.569000   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:09.569009   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.569018   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.569024   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.572245   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:09.572668   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:09.572685   31878 pod_ready.go:81] duration metric: took 399.881647ms for pod "kube-controller-manager-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.572694   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.768898   31878 request.go:632] Waited for 196.11828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m02
	I0814 16:29:09.768960   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m02
	I0814 16:29:09.768968   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.768978   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.768988   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.773594   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:29:09.968689   31878 request.go:632] Waited for 194.254671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:09.968758   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:09.968774   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.968785   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.968793   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.971724   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:09.972375   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:09.972394   31878 pod_ready.go:81] duration metric: took 399.693107ms for pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.972404   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.169550   31878 request.go:632] Waited for 197.077109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m03
	I0814 16:29:10.169646   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m03
	I0814 16:29:10.169657   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.169669   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.169677   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.172716   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:10.368844   31878 request.go:632] Waited for 195.315402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:10.368949   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:10.368963   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.368972   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.368977   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.372288   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:10.372837   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780-m03" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:10.372858   31878 pod_ready.go:81] duration metric: took 400.448188ms for pod "kube-controller-manager-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.372870   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4q2dq" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.569017   31878 request.go:632] Waited for 196.052872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q2dq
	I0814 16:29:10.569075   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q2dq
	I0814 16:29:10.569081   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.569090   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.569099   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.572129   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:10.769223   31878 request.go:632] Waited for 196.399201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:10.769288   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:10.769296   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.769306   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.769311   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.772503   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:10.773054   31878 pod_ready.go:92] pod "kube-proxy-4q2dq" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:10.773075   31878 pod_ready.go:81] duration metric: took 400.188151ms for pod "kube-proxy-4q2dq" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.773088   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-79txl" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.969067   31878 request.go:632] Waited for 195.902033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-79txl
	I0814 16:29:10.969119   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-79txl
	I0814 16:29:10.969124   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.969131   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.969136   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.972148   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:11.169215   31878 request.go:632] Waited for 196.37647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:11.169306   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:11.169317   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.169328   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.169338   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.172144   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:11.172632   31878 pod_ready.go:92] pod "kube-proxy-79txl" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:11.172650   31878 pod_ready.go:81] duration metric: took 399.555003ms for pod "kube-proxy-79txl" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.172662   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97tjj" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.369808   31878 request.go:632] Waited for 196.984895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97tjj
	I0814 16:29:11.369925   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97tjj
	I0814 16:29:11.369932   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.369939   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.369947   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.373101   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:11.568936   31878 request.go:632] Waited for 195.065778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:11.569027   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:11.569042   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.569052   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.569058   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.571899   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:11.572502   31878 pod_ready.go:92] pod "kube-proxy-97tjj" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:11.572526   31878 pod_ready.go:81] duration metric: took 399.85308ms for pod "kube-proxy-97tjj" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.572540   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.769425   31878 request.go:632] Waited for 196.784299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780
	I0814 16:29:11.769485   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780
	I0814 16:29:11.769493   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.769502   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.769512   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.772657   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:11.969505   31878 request.go:632] Waited for 196.222574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:11.969586   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:11.969597   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.969607   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.969628   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.972738   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:11.973379   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:11.973403   31878 pod_ready.go:81] duration metric: took 400.847019ms for pod "kube-scheduler-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.973413   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:12.169525   31878 request.go:632] Waited for 196.045447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m02
	I0814 16:29:12.169619   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m02
	I0814 16:29:12.169630   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.169640   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.169648   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.172903   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:12.368792   31878 request.go:632] Waited for 195.312013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:12.368860   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:12.368867   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.368877   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.368882   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.371851   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:12.372325   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:12.372343   31878 pod_ready.go:81] duration metric: took 398.923788ms for pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:12.372352   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:12.569518   31878 request.go:632] Waited for 197.106752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m03
	I0814 16:29:12.569587   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m03
	I0814 16:29:12.569593   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.569601   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.569605   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.572556   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:12.768667   31878 request.go:632] Waited for 195.348501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:12.768748   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:12.768763   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.768791   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.768797   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.771628   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:12.772128   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780-m03" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:12.772146   31878 pod_ready.go:81] duration metric: took 399.787744ms for pod "kube-scheduler-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:12.772156   31878 pod_ready.go:38] duration metric: took 5.199783055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:29:12.772190   31878 api_server.go:52] waiting for apiserver process to appear ...
	I0814 16:29:12.772270   31878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:29:12.789000   31878 api_server.go:72] duration metric: took 23.501684528s to wait for apiserver process to appear ...
	I0814 16:29:12.789024   31878 api_server.go:88] waiting for apiserver healthz status ...
	I0814 16:29:12.789045   31878 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0814 16:29:12.793311   31878 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0814 16:29:12.793380   31878 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0814 16:29:12.793386   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.793393   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.793399   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.794223   31878 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0814 16:29:12.794281   31878 api_server.go:141] control plane version: v1.31.0
	I0814 16:29:12.794293   31878 api_server.go:131] duration metric: took 5.262979ms to wait for apiserver health ...
	I0814 16:29:12.794303   31878 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 16:29:12.969628   31878 request.go:632] Waited for 175.246778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:12.969724   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:12.969735   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.969742   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.969746   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.975221   31878 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0814 16:29:12.981537   31878 system_pods.go:59] 24 kube-system pods found
	I0814 16:29:12.981574   31878 system_pods.go:61] "coredns-6f6b679f8f-28k2m" [ec3725c1-3e21-49b0-9caf-922ef1928ed8] Running
	I0814 16:29:12.981582   31878 system_pods.go:61] "coredns-6f6b679f8f-kc84b" [3a483f17-cab5-4090-abc6-808d84397a8a] Running
	I0814 16:29:12.981587   31878 system_pods.go:61] "etcd-ha-597780" [9af2f660-01fe-499f-902e-4988a5527c5a] Running
	I0814 16:29:12.981596   31878 system_pods.go:61] "etcd-ha-597780-m02" [c811879c-cf46-4c5b-aec2-6fa9aae64d13] Running
	I0814 16:29:12.981600   31878 system_pods.go:61] "etcd-ha-597780-m03" [7970e939-1b0d-4a5c-9d60-8cee7ac3cd63] Running
	I0814 16:29:12.981605   31878 system_pods.go:61] "kindnet-2p7zj" [c62a2c70-6ef9-44cb-9a04-9a519f8be934] Running
	I0814 16:29:12.981611   31878 system_pods.go:61] "kindnet-c8f8r" [b053dfba-820a-416f-9233-ececd7159e1e] Running
	I0814 16:29:12.981616   31878 system_pods.go:61] "kindnet-zm75h" [1e5eabaf-5973-4658-b12b-f7faf67b8af7] Running
	I0814 16:29:12.981621   31878 system_pods.go:61] "kube-apiserver-ha-597780" [8efb614b-9a4f-4029-aba3-e2183fb20627] Running
	I0814 16:29:12.981626   31878 system_pods.go:61] "kube-apiserver-ha-597780-m02" [26d7d4c8-6f40-4217-bf24-f9f94c9f8a79] Running
	I0814 16:29:12.981633   31878 system_pods.go:61] "kube-apiserver-ha-597780-m03" [dcfc0768-d66a-41fe-9dd5-44a7bd3de490] Running
	I0814 16:29:12.981642   31878 system_pods.go:61] "kube-controller-manager-ha-597780" [ad59b322-ee34-4041-af68-8b5ffcdff9dd] Running
	I0814 16:29:12.981648   31878 system_pods.go:61] "kube-controller-manager-ha-597780-m02" [a25ce1a0-cedb-40cd-ade3-ba63a4b69cd4] Running
	I0814 16:29:12.981656   31878 system_pods.go:61] "kube-controller-manager-ha-597780-m03" [79f9e4bd-bd33-424a-be78-d5175c11592e] Running
	I0814 16:29:12.981662   31878 system_pods.go:61] "kube-proxy-4q2dq" [9e95547c-001c-4942-b160-33e37a389820] Running
	I0814 16:29:12.981667   31878 system_pods.go:61] "kube-proxy-79txl" [ea48ab09-60d5-4133-accc-f3fd69a50c5d] Running
	I0814 16:29:12.981673   31878 system_pods.go:61] "kube-proxy-97tjj" [8de24848-3fe3-4be5-b78f-169457f28da3] Running
	I0814 16:29:12.981678   31878 system_pods.go:61] "kube-scheduler-ha-597780" [c1576ee1-5aed-4177-b37e-76786ceee1a1] Running
	I0814 16:29:12.981684   31878 system_pods.go:61] "kube-scheduler-ha-597780-m02" [cb250902-8200-423a-8bd3-463aebd7379c] Running
	I0814 16:29:12.981691   31878 system_pods.go:61] "kube-scheduler-ha-597780-m03" [42853b7f-be1d-4252-b062-3ef76e17b1c4] Running
	I0814 16:29:12.981697   31878 system_pods.go:61] "kube-vip-ha-597780" [a5738727-b1a0-4750-9e02-784278225ee4] Running
	I0814 16:29:12.981702   31878 system_pods.go:61] "kube-vip-ha-597780-m02" [c2f92dd8-8248-44a7-bc10-a91546e50eb9] Running
	I0814 16:29:12.981708   31878 system_pods.go:61] "kube-vip-ha-597780-m03" [37835783-8797-41c9-8141-3b54f9bf0642] Running
	I0814 16:29:12.981715   31878 system_pods.go:61] "storage-provisioner" [9939439d-cddd-4505-b554-b72f749269fd] Running
	I0814 16:29:12.981724   31878 system_pods.go:74] duration metric: took 187.414897ms to wait for pod list to return data ...
	I0814 16:29:12.981739   31878 default_sa.go:34] waiting for default service account to be created ...
	I0814 16:29:13.169121   31878 request.go:632] Waited for 187.288377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0814 16:29:13.169184   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0814 16:29:13.169189   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:13.169196   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:13.169200   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:13.172851   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:13.172967   31878 default_sa.go:45] found service account: "default"
	I0814 16:29:13.172983   31878 default_sa.go:55] duration metric: took 191.237857ms for default service account to be created ...
	I0814 16:29:13.172991   31878 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 16:29:13.369473   31878 request.go:632] Waited for 196.411676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:13.369524   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:13.369529   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:13.369537   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:13.369544   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:13.374488   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:29:13.382315   31878 system_pods.go:86] 24 kube-system pods found
	I0814 16:29:13.382348   31878 system_pods.go:89] "coredns-6f6b679f8f-28k2m" [ec3725c1-3e21-49b0-9caf-922ef1928ed8] Running
	I0814 16:29:13.382354   31878 system_pods.go:89] "coredns-6f6b679f8f-kc84b" [3a483f17-cab5-4090-abc6-808d84397a8a] Running
	I0814 16:29:13.382358   31878 system_pods.go:89] "etcd-ha-597780" [9af2f660-01fe-499f-902e-4988a5527c5a] Running
	I0814 16:29:13.382363   31878 system_pods.go:89] "etcd-ha-597780-m02" [c811879c-cf46-4c5b-aec2-6fa9aae64d13] Running
	I0814 16:29:13.382367   31878 system_pods.go:89] "etcd-ha-597780-m03" [7970e939-1b0d-4a5c-9d60-8cee7ac3cd63] Running
	I0814 16:29:13.382371   31878 system_pods.go:89] "kindnet-2p7zj" [c62a2c70-6ef9-44cb-9a04-9a519f8be934] Running
	I0814 16:29:13.382376   31878 system_pods.go:89] "kindnet-c8f8r" [b053dfba-820a-416f-9233-ececd7159e1e] Running
	I0814 16:29:13.382380   31878 system_pods.go:89] "kindnet-zm75h" [1e5eabaf-5973-4658-b12b-f7faf67b8af7] Running
	I0814 16:29:13.382384   31878 system_pods.go:89] "kube-apiserver-ha-597780" [8efb614b-9a4f-4029-aba3-e2183fb20627] Running
	I0814 16:29:13.382388   31878 system_pods.go:89] "kube-apiserver-ha-597780-m02" [26d7d4c8-6f40-4217-bf24-f9f94c9f8a79] Running
	I0814 16:29:13.382393   31878 system_pods.go:89] "kube-apiserver-ha-597780-m03" [dcfc0768-d66a-41fe-9dd5-44a7bd3de490] Running
	I0814 16:29:13.382400   31878 system_pods.go:89] "kube-controller-manager-ha-597780" [ad59b322-ee34-4041-af68-8b5ffcdff9dd] Running
	I0814 16:29:13.382405   31878 system_pods.go:89] "kube-controller-manager-ha-597780-m02" [a25ce1a0-cedb-40cd-ade3-ba63a4b69cd4] Running
	I0814 16:29:13.382410   31878 system_pods.go:89] "kube-controller-manager-ha-597780-m03" [79f9e4bd-bd33-424a-be78-d5175c11592e] Running
	I0814 16:29:13.382414   31878 system_pods.go:89] "kube-proxy-4q2dq" [9e95547c-001c-4942-b160-33e37a389820] Running
	I0814 16:29:13.382419   31878 system_pods.go:89] "kube-proxy-79txl" [ea48ab09-60d5-4133-accc-f3fd69a50c5d] Running
	I0814 16:29:13.382423   31878 system_pods.go:89] "kube-proxy-97tjj" [8de24848-3fe3-4be5-b78f-169457f28da3] Running
	I0814 16:29:13.382429   31878 system_pods.go:89] "kube-scheduler-ha-597780" [c1576ee1-5aed-4177-b37e-76786ceee1a1] Running
	I0814 16:29:13.382432   31878 system_pods.go:89] "kube-scheduler-ha-597780-m02" [cb250902-8200-423a-8bd3-463aebd7379c] Running
	I0814 16:29:13.382439   31878 system_pods.go:89] "kube-scheduler-ha-597780-m03" [42853b7f-be1d-4252-b062-3ef76e17b1c4] Running
	I0814 16:29:13.382443   31878 system_pods.go:89] "kube-vip-ha-597780" [a5738727-b1a0-4750-9e02-784278225ee4] Running
	I0814 16:29:13.382449   31878 system_pods.go:89] "kube-vip-ha-597780-m02" [c2f92dd8-8248-44a7-bc10-a91546e50eb9] Running
	I0814 16:29:13.382453   31878 system_pods.go:89] "kube-vip-ha-597780-m03" [37835783-8797-41c9-8141-3b54f9bf0642] Running
	I0814 16:29:13.382458   31878 system_pods.go:89] "storage-provisioner" [9939439d-cddd-4505-b554-b72f749269fd] Running
	I0814 16:29:13.382464   31878 system_pods.go:126] duration metric: took 209.465171ms to wait for k8s-apps to be running ...
	I0814 16:29:13.382474   31878 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 16:29:13.382540   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:29:13.397010   31878 system_svc.go:56] duration metric: took 14.527615ms WaitForService to wait for kubelet
	I0814 16:29:13.397039   31878 kubeadm.go:582] duration metric: took 24.10972781s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:29:13.397076   31878 node_conditions.go:102] verifying NodePressure condition ...
	I0814 16:29:13.569479   31878 request.go:632] Waited for 172.328639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0814 16:29:13.569543   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0814 16:29:13.569548   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:13.569555   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:13.569561   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:13.572794   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:13.574144   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:29:13.574170   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:29:13.574196   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:29:13.574201   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:29:13.574206   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:29:13.574214   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:29:13.574220   31878 node_conditions.go:105] duration metric: took 177.13766ms to run NodePressure ...
	I0814 16:29:13.574238   31878 start.go:241] waiting for startup goroutines ...
	I0814 16:29:13.574262   31878 start.go:255] writing updated cluster config ...
	I0814 16:29:13.574639   31878 ssh_runner.go:195] Run: rm -f paused
	I0814 16:29:13.625302   31878 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 16:29:13.626803   31878 out.go:177] * Done! kubectl is now configured to use "ha-597780" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.684683643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653173684658534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed16c725-4889-47b8-b14a-2ba5f21b106d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.685337111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f11255b6-4120-43a2-b6fe-69970be858df name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.685391030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f11255b6-4120-43a2-b6fe-69970be858df name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.685650265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723652958530026773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778223379570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778200551048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee,PodSandboxId:4c5c92213f0e6251be7e29adcda3cded019246457065d5c0b303c9d621a74ab5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723652778118596170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723652766365172600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172365276
2447290664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770,PodSandboxId:81fcaf0428bd7b15c5487925be0aaccb835f08d18cf3b4649f532fdc79b8e9e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172365275328
8661962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 498bfc5ba79cf3931c7cca41edd994ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723652750804081450,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723652750785186368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd,PodSandboxId:44348a00d6f65407f29b608c7166f2039a3b9bc56b2a09eb9ba311632aa6d825,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723652750790958720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea,PodSandboxId:004f1d9c571dd53906206c8edf18cc3624d52580711e76f40e3a2430cee0abf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723652750648705145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f11255b6-4120-43a2-b6fe-69970be858df name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.731203152Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55ad1464-d823-4d20-a5a1-6e837591f4b9 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.731357483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55ad1464-d823-4d20-a5a1-6e837591f4b9 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.732863023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b63470ae-42a5-4ba3-b68d-48e902897f95 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.733679601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653173733651074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b63470ae-42a5-4ba3-b68d-48e902897f95 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.734389159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c3c0c03-f58a-4380-8e24-18da97ffa5f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.734457847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c3c0c03-f58a-4380-8e24-18da97ffa5f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.734677256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723652958530026773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778223379570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778200551048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee,PodSandboxId:4c5c92213f0e6251be7e29adcda3cded019246457065d5c0b303c9d621a74ab5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723652778118596170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723652766365172600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172365276
2447290664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770,PodSandboxId:81fcaf0428bd7b15c5487925be0aaccb835f08d18cf3b4649f532fdc79b8e9e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172365275328
8661962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 498bfc5ba79cf3931c7cca41edd994ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723652750804081450,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723652750785186368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd,PodSandboxId:44348a00d6f65407f29b608c7166f2039a3b9bc56b2a09eb9ba311632aa6d825,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723652750790958720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea,PodSandboxId:004f1d9c571dd53906206c8edf18cc3624d52580711e76f40e3a2430cee0abf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723652750648705145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c3c0c03-f58a-4380-8e24-18da97ffa5f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.770154881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca7060bf-de4b-4bc2-bac8-04657dfac1a0 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.770288591Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca7060bf-de4b-4bc2-bac8-04657dfac1a0 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.771578773Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fa7057e-0bfa-4e22-b9b7-855c967f1f6d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.772027041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653173772004184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fa7057e-0bfa-4e22-b9b7-855c967f1f6d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.772476537Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2b2b48a-c704-4ded-985d-2264da43b014 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.772555077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2b2b48a-c704-4ded-985d-2264da43b014 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.772787540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723652958530026773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778223379570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778200551048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee,PodSandboxId:4c5c92213f0e6251be7e29adcda3cded019246457065d5c0b303c9d621a74ab5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723652778118596170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723652766365172600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172365276
2447290664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770,PodSandboxId:81fcaf0428bd7b15c5487925be0aaccb835f08d18cf3b4649f532fdc79b8e9e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172365275328
8661962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 498bfc5ba79cf3931c7cca41edd994ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723652750804081450,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723652750785186368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd,PodSandboxId:44348a00d6f65407f29b608c7166f2039a3b9bc56b2a09eb9ba311632aa6d825,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723652750790958720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea,PodSandboxId:004f1d9c571dd53906206c8edf18cc3624d52580711e76f40e3a2430cee0abf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723652750648705145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2b2b48a-c704-4ded-985d-2264da43b014 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.806524350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10370e04-8e41-4f74-9cd3-530dba7d3f0b name=/runtime.v1.RuntimeService/Version
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.806615158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10370e04-8e41-4f74-9cd3-530dba7d3f0b name=/runtime.v1.RuntimeService/Version
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.807715474Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4aa227c4-5a6b-49a0-b7c3-206b9183ce41 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.808303249Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653173808280467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4aa227c4-5a6b-49a0-b7c3-206b9183ce41 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.808947392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1af007a9-ccfa-4302-a963-08a4f1076a76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.809004439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1af007a9-ccfa-4302-a963-08a4f1076a76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:32:53 ha-597780 crio[678]: time="2024-08-14 16:32:53.809290835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723652958530026773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778223379570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778200551048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee,PodSandboxId:4c5c92213f0e6251be7e29adcda3cded019246457065d5c0b303c9d621a74ab5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723652778118596170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723652766365172600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172365276
2447290664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770,PodSandboxId:81fcaf0428bd7b15c5487925be0aaccb835f08d18cf3b4649f532fdc79b8e9e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172365275328
8661962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 498bfc5ba79cf3931c7cca41edd994ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723652750804081450,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723652750785186368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd,PodSandboxId:44348a00d6f65407f29b608c7166f2039a3b9bc56b2a09eb9ba311632aa6d825,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723652750790958720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea,PodSandboxId:004f1d9c571dd53906206c8edf18cc3624d52580711e76f40e3a2430cee0abf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723652750648705145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1af007a9-ccfa-4302-a963-08a4f1076a76 name=/runtime.v1.RuntimeService/ListContainers
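	
	The journal entries above are debug-level Request/Response pairs (Version, ImageFsInfo, ListContainers) logged by CRI-O's otel-collector interceptor. As a rough sketch only (assuming CRI-O runs as the systemd unit "crio" inside the minikube VM, as it does for this profile), the same stream can be followed live with:
	
	    out/minikube-linux-amd64 -p ha-597780 ssh "sudo journalctl -u crio -f"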
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e27a742b157d3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   24fc5367bc64f       busybox-7dff88458-rq7wd
	422bd8a4c6f73       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   103da86315438       coredns-6f6b679f8f-28k2m
	e6f5722727045       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   6b4d32c83825a       coredns-6f6b679f8f-kc84b
	fdde6ae1e8d74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   4c5c92213f0e6       storage-provisioner
	9383508aacb47       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   7c496d8d976b0       kindnet-zm75h
	37ced76497679       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   403a7dadd2cf1       kube-proxy-79txl
	f67f9d9915d53       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   81fcaf0428bd7       kube-vip-ha-597780
	be37bacc58210       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   c3627f4eb5471       etcd-ha-597780
	72903e6054081       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   44348a00d6f65       kube-controller-manager-ha-597780
	9049789221ccd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   dfba8d4d791ac       kube-scheduler-ha-597780
	4ad80a864cc60       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   004f1d9c571dd       kube-apiserver-ha-597780
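	
	The table above follows the layout printed by crictl's container listing. A minimal sketch for reproducing it by hand (assuming crictl and the default CRI-O socket inside the node, and the profile name ha-597780; the exact invocation used by the log collector may differ):
	
	    out/minikube-linux-amd64 -p ha-597780 ssh "sudo crictl ps -a"   # all containers known to CRI-O, including exited ones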
	
	
	==> coredns [422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c] <==
	[INFO] 10.244.2.2:35482 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159892s
	[INFO] 10.244.2.2:45275 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127461s
	[INFO] 10.244.0.4:43753 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123741s
	[INFO] 10.244.0.4:33481 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001866295s
	[INFO] 10.244.0.4:45903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132096s
	[INFO] 10.244.0.4:38858 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001421125s
	[INFO] 10.244.1.2:43848 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094776s
	[INFO] 10.244.1.2:34489 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124314s
	[INFO] 10.244.1.2:37019 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075532s
	[INFO] 10.244.1.2:33970 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072251s
	[INFO] 10.244.1.2:54832 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154144s
	[INFO] 10.244.2.2:44899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157073s
	[INFO] 10.244.2.2:57059 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141689s
	[INFO] 10.244.2.2:36168 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009915s
	[INFO] 10.244.0.4:54131 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070841s
	[INFO] 10.244.0.4:55620 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091367s
	[INFO] 10.244.0.4:43235 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075669s
	[INFO] 10.244.1.2:41689 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119685s
	[INFO] 10.244.1.2:59902 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124326s
	[INFO] 10.244.2.2:40926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109376s
	[INFO] 10.244.2.2:51410 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177337s
	[INFO] 10.244.0.4:34296 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121681s
	[INFO] 10.244.1.2:46660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107008s
	[INFO] 10.244.1.2:58922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127256s
	[INFO] 10.244.1.2:50299 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110499s
	
	
	==> coredns [e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d] <==
	[INFO] 10.244.2.2:48502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000993326s
	[INFO] 10.244.2.2:58814 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00702444s
	[INFO] 10.244.1.2:38201 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001498033s
	[INFO] 10.244.1.2:46765 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000402533s
	[INFO] 10.244.1.2:60614 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001481239s
	[INFO] 10.244.2.2:59844 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200712s
	[INFO] 10.244.2.2:41213 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139289s
	[INFO] 10.244.2.2:59870 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168386s
	[INFO] 10.244.0.4:37158 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073468s
	[INFO] 10.244.0.4:39161 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108787s
	[INFO] 10.244.0.4:39022 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165944s
	[INFO] 10.244.0.4:57473 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073383s
	[INFO] 10.244.1.2:44098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115699s
	[INFO] 10.244.1.2:33898 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001695302s
	[INFO] 10.244.1.2:48541 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080109s
	[INFO] 10.244.2.2:54351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123574s
	[INFO] 10.244.0.4:59667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011063s
	[INFO] 10.244.1.2:44877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127432s
	[INFO] 10.244.1.2:57437 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119268s
	[INFO] 10.244.2.2:57502 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109191s
	[INFO] 10.244.2.2:34873 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084958s
	[INFO] 10.244.0.4:38163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100276s
	[INFO] 10.244.0.4:57638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133846s
	[INFO] 10.244.0.4:41879 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064694s
	[INFO] 10.244.1.2:53124 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175486s
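	
	Each [INFO] line in the two coredns logs above records one DNS query: client IP:port, query id, the question (type, class, name, transport, size), then the response code, flags, answer size, and latency. A minimal sketch for pulling the same logs directly (assuming the kubectl context is named after the profile, ha-597780):
	
	    kubectl --context ha-597780 -n kube-system logs coredns-6f6b679f8f-28k2m
	    kubectl --context ha-597780 -n kube-system logs coredns-6f6b679f8f-kc84b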
	
	
	==> describe nodes <==
	Name:               ha-597780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_26_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:25:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:32:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:29:33 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:29:33 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:29:33 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:29:33 +0000   Wed, 14 Aug 2024 16:26:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-597780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 380f2e1fef9b4a7ba6d1d939cb1bae1a
	  System UUID:                380f2e1f-ef9b-4a7b-a6d1-d939cb1bae1a
	  Boot ID:                    aa55ed43-2220-4096-a571-51cd5b70ed86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rq7wd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-6f6b679f8f-28k2m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m53s
	  kube-system                 coredns-6f6b679f8f-kc84b             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m53s
	  kube-system                 etcd-ha-597780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m55s
	  kube-system                 kindnet-zm75h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m53s
	  kube-system                 kube-apiserver-ha-597780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-controller-manager-ha-597780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-proxy-79txl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 kube-scheduler-ha-597780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-vip-ha-597780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m51s  kube-proxy       
	  Normal  Starting                 6m55s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m55s  kubelet          Node ha-597780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s  kubelet          Node ha-597780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s  kubelet          Node ha-597780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m54s  node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal  NodeReady                6m37s  kubelet          Node ha-597780 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
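	
	The node summaries in this section are kubectl describe output for each member of the HA cluster. A sketch for regenerating the summary of the primary control-plane node (again assuming the context name matches the profile):
	
	    kubectl --context ha-597780 describe node ha-597780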
	
	
	Name:               ha-597780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_27_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:27:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:30:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 14 Aug 2024 16:29:36 +0000   Wed, 14 Aug 2024 16:31:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 14 Aug 2024 16:29:36 +0000   Wed, 14 Aug 2024 16:31:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 14 Aug 2024 16:29:36 +0000   Wed, 14 Aug 2024 16:31:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 14 Aug 2024 16:29:36 +0000   Wed, 14 Aug 2024 16:31:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-597780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a36bc81f5b549f48c64d8093b0c45f0
	  System UUID:                2a36bc81-f5b5-49f4-8c64-d8093b0c45f0
	  Boot ID:                    cbc02bb3-0be5-453b-8e50-9b929e5b8c87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9lh2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-597780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-c8f8r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-597780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-ha-597780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-4q2dq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-597780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-vip-ha-597780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m21s)  kubelet          Node ha-597780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m21s)  kubelet          Node ha-597780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m21s)  kubelet          Node ha-597780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-597780-m02 status is now: NodeNotReady
	
	
	Name:               ha-597780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_28_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:28:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:32:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:29:47 +0000   Wed, 14 Aug 2024 16:28:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:29:47 +0000   Wed, 14 Aug 2024 16:28:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:29:47 +0000   Wed, 14 Aug 2024 16:28:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:29:47 +0000   Wed, 14 Aug 2024 16:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-597780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ad778cd276b4853bc1e6d49295cbd2e
	  System UUID:                6ad778cd-276b-4853-bc1e-6d49295cbd2e
	  Boot ID:                    bd84ee8a-9079-478b-80c5-90f2f9e71408
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-27k42                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-597780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-2p7zj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-597780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-597780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-97tjj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-597780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-597780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-597780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-597780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-597780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	
	
	Name:               ha-597780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_29_55_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:29:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:32:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:30:25 +0000   Wed, 14 Aug 2024 16:29:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:30:25 +0000   Wed, 14 Aug 2024 16:29:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:30:25 +0000   Wed, 14 Aug 2024 16:29:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:30:25 +0000   Wed, 14 Aug 2024 16:30:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    ha-597780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fa932f445844ff7a66a64ac6cdf169b
	  System UUID:                0fa932f4-4584-4ff7-a66a-64ac6cdf169b
	  Boot ID:                    305597ed-d6ab-49f8-ae00-26804526aa5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5x5s7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-bmf62    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m55s              kube-proxy       
	  Normal  RegisteredNode           3m                 node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m (x8 over 3m1s)  kubelet          Node ha-597780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x8 over 3m1s)  kubelet          Node ha-597780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x7 over 3m1s)  kubelet          Node ha-597780-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal  RegisteredNode           2m57s              node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	
	
	==> dmesg <==
	[Aug14 16:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050534] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036884] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.713616] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.759222] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.575706] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.613825] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.065926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069239] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.173403] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.130531] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.250569] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +3.824868] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +3.756438] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.057963] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.054111] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.086455] kauditd_printk_skb: 79 callbacks suppressed
	[Aug14 16:26] kauditd_printk_skb: 62 callbacks suppressed
	[Aug14 16:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905] <==
	{"level":"warn","ts":"2024-08-14T16:32:53.851916Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:53.951788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.042174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.050016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.051626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.059829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.063979Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.073266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.092690Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.103269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.131723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.135067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.138362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.144402Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.150454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.152644Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.157608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.160981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.163911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.167659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.173275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.178290Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.179272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:32:54.238620Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64b82df06bebb0af","rtt":"905.125µs","error":"dial tcp 192.168.39.225:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-14T16:32:54.238710Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64b82df06bebb0af","rtt":"8.15593ms","error":"dial tcp 192.168.39.225:2380: connect: no route to host"}
	
	
	==> kernel <==
	 16:32:54 up 7 min,  0 users,  load average: 0.16, 0.27, 0.18
	Linux ha-597780 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1] <==
	I0814 16:32:17.358883       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:32:27.366915       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:32:27.367125       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:32:27.367436       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:32:27.367526       1 main.go:299] handling current node
	I0814 16:32:27.367566       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:32:27.367585       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:32:27.367665       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:32:27.367684       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:32:37.365364       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:32:37.365428       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:32:37.365585       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:32:37.365604       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:32:37.365668       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:32:37.365686       1 main.go:299] handling current node
	I0814 16:32:37.365705       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:32:37.365710       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:32:47.365354       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:32:47.365415       1 main.go:299] handling current node
	I0814 16:32:47.365432       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:32:47.365440       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:32:47.365653       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:32:47.365681       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:32:47.365759       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:32:47.365767       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea] <==
	W0814 16:25:55.762483       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0814 16:25:55.763280       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 16:25:55.769816       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 16:25:56.090514       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 16:25:59.885282       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 16:25:59.904974       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0814 16:25:59.913868       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 16:26:01.337367       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0814 16:26:01.746147       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0814 16:29:19.129394       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41740: use of closed network connection
	E0814 16:29:19.372915       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41758: use of closed network connection
	E0814 16:29:19.550858       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41774: use of closed network connection
	E0814 16:29:19.734746       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41792: use of closed network connection
	E0814 16:29:19.909648       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41808: use of closed network connection
	E0814 16:29:20.076996       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41828: use of closed network connection
	E0814 16:29:20.246071       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41844: use of closed network connection
	E0814 16:29:20.411630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41862: use of closed network connection
	E0814 16:29:20.589195       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41888: use of closed network connection
	E0814 16:29:20.865814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41914: use of closed network connection
	E0814 16:29:21.043561       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41936: use of closed network connection
	E0814 16:29:21.218997       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41966: use of closed network connection
	E0814 16:29:21.388922       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41992: use of closed network connection
	E0814 16:29:21.560524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42000: use of closed network connection
	E0814 16:29:21.735637       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42006: use of closed network connection
	W0814 16:30:45.784712       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.167 192.168.39.4]
	
	
	==> kube-controller-manager [72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd] <==
	I0814 16:29:54.600855       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-597780-m04" podCIDRs=["10.244.3.0/24"]
	I0814 16:29:54.600902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:54.600975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:54.620307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:54.838715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:55.055737       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:55.435910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:55.972247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:55.973019       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-597780-m04"
	I0814 16:29:56.014610       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:57.234815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:57.322839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:04.884780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:13.554499       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:13.555272       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-597780-m04"
	I0814 16:30:13.570596       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:14.778502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:25.273275       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:31:09.806307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	I0814 16:31:09.806702       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-597780-m04"
	I0814 16:31:09.827514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	I0814 16:31:09.884153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.333879ms"
	I0814 16:31:09.885393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.009µs"
	I0814 16:31:11.084544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	I0814 16:31:15.055155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	
	
	==> kube-proxy [37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:26:02.673675       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 16:26:02.694314       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E0814 16:26:02.694393       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:26:02.727764       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:26:02.727815       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:26:02.727845       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:26:02.729922       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:26:02.730197       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:26:02.730270       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:26:02.732001       1 config.go:197] "Starting service config controller"
	I0814 16:26:02.732031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:26:02.732048       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:26:02.732051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:26:02.734298       1 config.go:326] "Starting node config controller"
	I0814 16:26:02.734385       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:26:02.832657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:26:02.832736       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:26:02.834437       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e] <==
	W0814 16:25:55.124761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:25:55.124856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:25:55.134951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:25:55.135030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:25:55.234922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 16:25:55.235107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:25:55.275533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 16:25:55.275674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:25:55.384531       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 16:25:55.384674       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 16:25:55.440408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 16:25:55.440501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0814 16:25:57.150779       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 16:29:14.511741       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9lh2\": pod busybox-7dff88458-w9lh2 is already assigned to node \"ha-597780-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w9lh2" node="ha-597780-m02"
	E0814 16:29:14.513586       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d61c6e28-3a9c-47b5-ad97-6d1c77c30857(default/busybox-7dff88458-w9lh2) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w9lh2"
	E0814 16:29:14.513669       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9lh2\": pod busybox-7dff88458-w9lh2 is already assigned to node \"ha-597780-m02\"" pod="default/busybox-7dff88458-w9lh2"
	I0814 16:29:14.513886       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w9lh2" node="ha-597780-m02"
	E0814 16:29:14.544849       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-27k42\": pod busybox-7dff88458-27k42 is already assigned to node \"ha-597780-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-27k42" node="ha-597780-m03"
	E0814 16:29:14.544959       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-27k42\": pod busybox-7dff88458-27k42 is already assigned to node \"ha-597780-m03\"" pod="default/busybox-7dff88458-27k42"
	E0814 16:29:14.545719       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rq7wd\": pod busybox-7dff88458-rq7wd is already assigned to node \"ha-597780\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rq7wd" node="ha-597780"
	E0814 16:29:14.557325       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rq7wd\": pod busybox-7dff88458-rq7wd is already assigned to node \"ha-597780\"" pod="default/busybox-7dff88458-rq7wd"
	E0814 16:29:54.657005       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5x5s7\": pod kindnet-5x5s7 is already assigned to node \"ha-597780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5x5s7" node="ha-597780-m04"
	E0814 16:29:54.657112       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 45af1890-2443-48af-a4f1-38ce0ab0f558(kube-system/kindnet-5x5s7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5x5s7"
	E0814 16:29:54.657139       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5x5s7\": pod kindnet-5x5s7 is already assigned to node \"ha-597780-m04\"" pod="kube-system/kindnet-5x5s7"
	I0814 16:29:54.657164       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5x5s7" node="ha-597780-m04"
	
	
	==> kubelet <==
	Aug 14 16:31:19 ha-597780 kubelet[1315]: E0814 16:31:19.974916    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653079974606646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:19 ha-597780 kubelet[1315]: E0814 16:31:19.975201    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653079974606646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:29 ha-597780 kubelet[1315]: E0814 16:31:29.977184    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653089976869657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:29 ha-597780 kubelet[1315]: E0814 16:31:29.977278    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653089976869657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:39 ha-597780 kubelet[1315]: E0814 16:31:39.979401    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653099978723937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:39 ha-597780 kubelet[1315]: E0814 16:31:39.979445    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653099978723937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:49 ha-597780 kubelet[1315]: E0814 16:31:49.981937    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653109981340173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:49 ha-597780 kubelet[1315]: E0814 16:31:49.981973    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653109981340173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:59 ha-597780 kubelet[1315]: E0814 16:31:59.866389    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 16:31:59 ha-597780 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 16:31:59 ha-597780 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 16:31:59 ha-597780 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 16:31:59 ha-597780 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 16:31:59 ha-597780 kubelet[1315]: E0814 16:31:59.983669    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653119983202123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:31:59 ha-597780 kubelet[1315]: E0814 16:31:59.983709    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653119983202123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:09 ha-597780 kubelet[1315]: E0814 16:32:09.985777    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653129985396166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:09 ha-597780 kubelet[1315]: E0814 16:32:09.985823    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653129985396166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:19 ha-597780 kubelet[1315]: E0814 16:32:19.990013    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653139989535570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:19 ha-597780 kubelet[1315]: E0814 16:32:19.990065    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653139989535570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:29 ha-597780 kubelet[1315]: E0814 16:32:29.991918    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653149991668287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:29 ha-597780 kubelet[1315]: E0814 16:32:29.991990    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653149991668287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:39 ha-597780 kubelet[1315]: E0814 16:32:39.994199    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653159993729013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:39 ha-597780 kubelet[1315]: E0814 16:32:39.994853    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653159993729013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:49 ha-597780 kubelet[1315]: E0814 16:32:49.996731    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653169996427038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:49 ha-597780 kubelet[1315]: E0814 16:32:49.997002    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653169996427038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-597780 -n ha-597780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-597780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.81s)
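For local triage, the failing stop/status sequence can be repeated by hand against the same profile; a minimal sketch, assuming a local checkout with the integration binary built at out/minikube-linux-amd64 and the profile name ha-597780 taken from the logs above:

    # Stop the secondary control-plane node (mirrors the "node start m02" invocation logged below).
    out/minikube-linux-amd64 -p ha-597780 node stop m02 -v=7 --alsologtostderr
    # Re-run the harness's post-mortem checks from helpers_test.go.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-597780 -n ha-597780
    kubectl --context ha-597780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running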

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (59.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 3 (3.19181469s)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:32:58.687749   36836 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:32:58.687853   36836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:32:58.687861   36836 out.go:304] Setting ErrFile to fd 2...
	I0814 16:32:58.687865   36836 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:32:58.688029   36836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:32:58.688175   36836 out.go:298] Setting JSON to false
	I0814 16:32:58.688199   36836 mustload.go:65] Loading cluster: ha-597780
	I0814 16:32:58.688219   36836 notify.go:220] Checking for updates...
	I0814 16:32:58.688571   36836 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:32:58.688594   36836 status.go:255] checking status of ha-597780 ...
	I0814 16:32:58.689024   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:58.689077   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:58.708408   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I0814 16:32:58.708917   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:58.709445   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:32:58.709467   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:58.709797   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:58.709977   36836 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:32:58.711461   36836 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:32:58.711477   36836 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:32:58.711762   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:58.711796   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:58.726028   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I0814 16:32:58.726440   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:58.726939   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:32:58.726962   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:58.727223   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:58.727445   36836 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:32:58.730235   36836 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:32:58.730660   36836 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:32:58.730692   36836 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:32:58.730803   36836 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:32:58.731247   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:58.731345   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:58.745770   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0814 16:32:58.746136   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:58.746597   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:32:58.746618   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:58.746896   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:58.747119   36836 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:32:58.747295   36836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:32:58.747355   36836 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:32:58.750019   36836 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:32:58.750479   36836 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:32:58.750529   36836 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:32:58.750723   36836 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:32:58.750928   36836 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:32:58.751085   36836 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:32:58.751198   36836 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:32:58.826899   36836 ssh_runner.go:195] Run: systemctl --version
	I0814 16:32:58.832383   36836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:32:58.846847   36836 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:32:58.846874   36836 api_server.go:166] Checking apiserver status ...
	I0814 16:32:58.846911   36836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:32:58.864700   36836 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:32:58.874290   36836 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:32:58.874356   36836 ssh_runner.go:195] Run: ls
	I0814 16:32:58.878164   36836 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:32:58.882485   36836 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:32:58.882504   36836 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:32:58.882513   36836 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:32:58.882526   36836 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:32:58.882797   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:58.882844   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:58.897615   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42103
	I0814 16:32:58.898008   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:58.898439   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:32:58.898459   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:58.898757   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:58.898926   36836 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:32:58.900429   36836 status.go:330] ha-597780-m02 host status = "Running" (err=<nil>)
	I0814 16:32:58.900445   36836 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:32:58.900720   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:58.900771   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:58.914724   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I0814 16:32:58.915107   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:58.915603   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:32:58.915631   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:58.915920   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:58.916080   36836 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:32:58.918496   36836 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:32:58.918918   36836 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:32:58.918945   36836 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:32:58.919080   36836 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:32:58.919415   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:32:58.919450   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:32:58.933561   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0814 16:32:58.933906   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:32:58.934413   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:32:58.934428   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:32:58.934700   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:32:58.935016   36836 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:32:58.935195   36836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:32:58.935212   36836 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:32:58.937532   36836 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:32:58.937951   36836 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:32:58.937970   36836 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:32:58.938187   36836 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:32:58.938331   36836 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:32:58.938502   36836 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:32:58.938644   36836 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	W0814 16:33:01.499587   36836 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:01.499693   36836 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0814 16:33:01.499707   36836 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:01.499714   36836 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 16:33:01.499727   36836 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:01.499734   36836 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:33:01.500036   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:01.500085   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:01.515046   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39197
	I0814 16:33:01.515579   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:01.516081   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:33:01.516102   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:01.516476   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:01.516675   36836 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:01.518878   36836 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:33:01.518897   36836 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:01.519226   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:01.519263   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:01.533813   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35641
	I0814 16:33:01.534297   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:01.534746   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:33:01.534765   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:01.535056   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:01.535228   36836 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:33:01.538323   36836 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:01.538883   36836 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:01.538918   36836 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:01.539010   36836 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:01.539301   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:01.539358   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:01.553988   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
	I0814 16:33:01.554466   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:01.554972   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:33:01.555002   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:01.555370   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:01.555531   36836 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:01.555731   36836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:01.555749   36836 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:01.558508   36836 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:01.558999   36836 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:01.559030   36836 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:01.559288   36836 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:01.559488   36836 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:01.559692   36836 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:01.559834   36836 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:01.642282   36836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:01.657174   36836 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:01.657197   36836 api_server.go:166] Checking apiserver status ...
	I0814 16:33:01.657227   36836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:01.670465   36836 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:33:01.679237   36836 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:01.679292   36836 ssh_runner.go:195] Run: ls
	I0814 16:33:01.683354   36836 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:01.689157   36836 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:01.689178   36836 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:33:01.689190   36836 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:01.689213   36836 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:33:01.689602   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:01.689642   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:01.704608   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0814 16:33:01.705003   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:01.705518   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:33:01.705536   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:01.705873   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:01.706086   36836 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:01.707662   36836 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:33:01.707676   36836 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:01.707974   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:01.708011   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:01.722649   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43733
	I0814 16:33:01.722949   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:01.723376   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:33:01.723398   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:01.723712   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:01.723904   36836 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:33:01.726562   36836 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:01.726980   36836 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:01.727020   36836 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:01.727138   36836 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:01.727458   36836 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:01.727490   36836 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:01.741443   36836 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37323
	I0814 16:33:01.741903   36836 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:01.742365   36836 main.go:141] libmachine: Using API Version  1
	I0814 16:33:01.742383   36836 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:01.742664   36836 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:01.742820   36836 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:01.743004   36836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:01.743023   36836 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:01.745272   36836 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:01.745638   36836 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:01.745663   36836 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:01.745776   36836 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:01.745947   36836 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:01.746086   36836 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:01.746222   36836 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:01.825944   36836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:01.838919   36836 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
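[Editor's note] The repeated "dial tcp 192.168.39.225:22: connect: no route to host" lines above are the status probe failing to reach the SSH port of the stopped secondary control-plane node ha-597780-m02, which is why its summary degrades to Host:Error / Kubelet:Nonexistent while the reachable nodes go on to run the remote `df -h /var | awk 'NR==2{print $5}'` check. A minimal Go sketch of that kind of reachability fallback is shown below; it is not minikube's actual code, only an illustration of the behaviour visible in this log.

	// Minimal sketch (not minikube's implementation) of the probe behaviour
	// seen above: if the node's SSH port cannot be reached, report an error
	// status instead of running the remote disk-usage command.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func probeNode(addr string) string {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// Unreachable node: Host:Error, Kubelet/APIServer:Nonexistent.
			return fmt.Sprintf("Host:Error (%v)", err)
		}
		conn.Close()
		// Reachable node: the real tool would now open an SSH session and run
		// commands such as `df -h /var | awk 'NR==2{print $5}'`.
		return "Host:Running"
	}

	func main() {
		fmt.Println(probeNode("192.168.39.225:22")) // the stopped ha-597780-m02 node
	}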
E0814 16:33:02.588571   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 3 (4.881304896s)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:33:03.157285   36937 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:33:03.157564   36937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:03.157574   36937 out.go:304] Setting ErrFile to fd 2...
	I0814 16:33:03.157578   36937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:03.157774   36937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:33:03.157931   36937 out.go:298] Setting JSON to false
	I0814 16:33:03.157956   36937 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:03.158071   36937 notify.go:220] Checking for updates...
	I0814 16:33:03.158297   36937 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:03.158310   36937 status.go:255] checking status of ha-597780 ...
	I0814 16:33:03.158668   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:03.158740   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:03.176272   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41677
	I0814 16:33:03.176717   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:03.177271   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:03.177304   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:03.177688   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:03.178008   36937 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:33:03.179624   36937 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:33:03.179638   36937 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:03.180030   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:03.180095   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:03.195555   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0814 16:33:03.195944   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:03.196396   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:03.196419   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:03.196674   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:03.196868   36937 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:33:03.199286   36937 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:03.199755   36937 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:03.199789   36937 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:03.199958   36937 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:03.200264   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:03.200295   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:03.215747   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I0814 16:33:03.216127   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:03.216657   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:03.216680   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:03.216962   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:03.217127   36937 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:33:03.217323   36937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:03.217342   36937 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:33:03.219984   36937 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:03.220404   36937 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:03.220428   36937 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:03.220578   36937 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:33:03.220702   36937 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:33:03.220813   36937 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:33:03.220894   36937 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:33:03.298202   36937 ssh_runner.go:195] Run: systemctl --version
	I0814 16:33:03.303733   36937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:03.319020   36937 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:03.319052   36937 api_server.go:166] Checking apiserver status ...
	I0814 16:33:03.319097   36937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:03.331560   36937 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:33:03.339896   36937 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:03.339954   36937 ssh_runner.go:195] Run: ls
	I0814 16:33:03.344328   36937 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:03.348359   36937 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:03.348388   36937 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:33:03.348398   36937 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:03.348414   36937 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:33:03.348684   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:03.348714   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:03.364267   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44249
	I0814 16:33:03.364724   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:03.365155   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:03.365177   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:03.365467   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:03.365672   36937 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:33:03.367347   36937 status.go:330] ha-597780-m02 host status = "Running" (err=<nil>)
	I0814 16:33:03.367365   36937 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:03.367648   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:03.367679   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:03.383423   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0814 16:33:03.383847   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:03.384306   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:03.384324   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:03.384649   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:03.384813   36937 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:33:03.387684   36937 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:03.388117   36937 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:03.388139   36937 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:03.388306   36937 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:03.388634   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:03.388672   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:03.404287   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41427
	I0814 16:33:03.404685   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:03.405163   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:03.405188   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:03.405491   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:03.405706   36937 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:33:03.405885   36937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:03.405908   36937 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:33:03.408760   36937 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:03.409106   36937 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:03.409143   36937 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:03.409350   36937 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:33:03.409510   36937 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:33:03.409683   36937 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:33:03.409799   36937 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	W0814 16:33:04.571649   36937 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:04.571723   36937 retry.go:31] will retry after 275.35536ms: dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:07.643629   36937 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:07.643701   36937 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0814 16:33:07.643714   36937 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:07.643720   36937 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 16:33:07.643743   36937 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:07.643752   36937 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:33:07.644050   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:07.644092   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:07.660584   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0814 16:33:07.661065   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:07.661632   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:07.661658   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:07.661949   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:07.662143   36937 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:07.663607   36937 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:33:07.663620   36937 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:07.663897   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:07.663927   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:07.678637   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I0814 16:33:07.679028   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:07.679462   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:07.679481   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:07.679847   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:07.680046   36937 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:33:07.682779   36937 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:07.683189   36937 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:07.683215   36937 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:07.683376   36937 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:07.683684   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:07.683717   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:07.698961   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0814 16:33:07.699431   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:07.699851   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:07.699874   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:07.700145   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:07.700340   36937 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:07.700509   36937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:07.700539   36937 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:07.703315   36937 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:07.703712   36937 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:07.703735   36937 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:07.703816   36937 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:07.703963   36937 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:07.704201   36937 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:07.704366   36937 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:07.786600   36937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:07.800891   36937 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:07.800920   36937 api_server.go:166] Checking apiserver status ...
	I0814 16:33:07.800953   36937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:07.817971   36937 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:33:07.826981   36937 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:07.827039   36937 ssh_runner.go:195] Run: ls
	I0814 16:33:07.831147   36937 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:07.835458   36937 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:07.835485   36937 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:33:07.835496   36937 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:07.835515   36937 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:33:07.835822   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:07.835863   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:07.850635   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
	I0814 16:33:07.851027   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:07.851505   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:07.851525   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:07.851857   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:07.852054   36937 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:07.853764   36937 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:33:07.853782   36937 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:07.854190   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:07.854234   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:07.869922   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0814 16:33:07.870311   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:07.870831   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:07.870853   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:07.871159   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:07.871433   36937 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:33:07.874173   36937 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:07.874606   36937 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:07.874628   36937 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:07.874818   36937 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:07.875093   36937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:07.875126   36937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:07.889921   36937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I0814 16:33:07.890347   36937 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:07.890847   36937 main.go:141] libmachine: Using API Version  1
	I0814 16:33:07.890864   36937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:07.891139   36937 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:07.891307   36937 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:07.891574   36937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:07.891613   36937 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:07.894440   36937 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:07.894881   36937 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:07.894901   36937 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:07.895048   36937 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:07.895222   36937 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:07.895405   36937 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:07.895549   36937 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:07.978123   36937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:07.994032   36937 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
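[Editor's note] For the reachable control-plane nodes, the log shows the apiserver check in three steps: find the kube-apiserver process with `pgrep -xnf kube-apiserver.*minikube.*`, attempt (and harmlessly fail, on this cgroup layout) the freezer-cgroup lookup, then GET https://192.168.39.254:8443/healthz and treat a 200 "ok" as "apiserver: Running". The Go sketch below mirrors only that last step under stated assumptions: TLS verification is skipped for brevity, whereas the real client uses the profile's certificates; this is illustrative, not minikube's code.

	// Minimal sketch, assuming only what the log shows: an HTTP 200 from the
	// control-plane VIP's /healthz endpoint is taken as a healthy apiserver.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func apiserverHealthy(url string) bool {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Demo only: the real probe authenticates with client.crt/key.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK // /healthz body is "ok" on success
	}

	func main() {
		fmt.Println(apiserverHealthy("https://192.168.39.254:8443/healthz"))
	}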
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 3 (4.480639187s)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:33:10.011307   37038 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:33:10.011465   37038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:10.011476   37038 out.go:304] Setting ErrFile to fd 2...
	I0814 16:33:10.011482   37038 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:10.011676   37038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:33:10.011883   37038 out.go:298] Setting JSON to false
	I0814 16:33:10.011914   37038 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:10.012009   37038 notify.go:220] Checking for updates...
	I0814 16:33:10.012362   37038 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:10.012378   37038 status.go:255] checking status of ha-597780 ...
	I0814 16:33:10.012801   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:10.012840   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:10.028858   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41713
	I0814 16:33:10.029400   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:10.030051   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:10.030090   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:10.030490   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:10.030684   37038 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:33:10.032436   37038 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:33:10.032459   37038 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:10.032750   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:10.032791   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:10.048080   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45185
	I0814 16:33:10.048495   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:10.048977   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:10.048998   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:10.049278   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:10.049479   37038 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:33:10.052150   37038 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:10.052519   37038 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:10.052543   37038 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:10.052681   37038 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:10.052955   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:10.053000   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:10.067441   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I0814 16:33:10.067861   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:10.068299   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:10.068326   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:10.068591   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:10.068767   37038 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:33:10.068947   37038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:10.068968   37038 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:33:10.071613   37038 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:10.072000   37038 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:10.072037   37038 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:10.072179   37038 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:33:10.072344   37038 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:33:10.072478   37038 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:33:10.072653   37038 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:33:10.150156   37038 ssh_runner.go:195] Run: systemctl --version
	I0814 16:33:10.155635   37038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:10.170084   37038 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:10.170120   37038 api_server.go:166] Checking apiserver status ...
	I0814 16:33:10.170175   37038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:10.184021   37038 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:33:10.193158   37038 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:10.193216   37038 ssh_runner.go:195] Run: ls
	I0814 16:33:10.197011   37038 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:10.201075   37038 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:10.201097   37038 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:33:10.201109   37038 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:10.201124   37038 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:33:10.201511   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:10.201554   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:10.217503   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44369
	I0814 16:33:10.217895   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:10.218312   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:10.218335   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:10.218632   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:10.218798   37038 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:33:10.220236   37038 status.go:330] ha-597780-m02 host status = "Running" (err=<nil>)
	I0814 16:33:10.220252   37038 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:10.220533   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:10.220571   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:10.234638   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0814 16:33:10.235057   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:10.235586   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:10.235612   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:10.235891   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:10.236065   37038 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:33:10.238734   37038 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:10.239155   37038 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:10.239186   37038 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:10.239345   37038 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:10.239638   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:10.239672   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:10.254170   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42537
	I0814 16:33:10.254524   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:10.254946   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:10.254966   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:10.255280   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:10.255485   37038 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:33:10.255658   37038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:10.255680   37038 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:33:10.258205   37038 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:10.258587   37038 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:10.258609   37038 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:10.258724   37038 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:33:10.258887   37038 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:33:10.259025   37038 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:33:10.259128   37038 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	W0814 16:33:10.715525   37038 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:10.715582   37038 retry.go:31] will retry after 311.604604ms: dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:14.107615   37038 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:14.107696   37038 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0814 16:33:14.107709   37038 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:14.107717   37038 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 16:33:14.107744   37038 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:14.107752   37038 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:33:14.108076   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:14.108116   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:14.122770   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0814 16:33:14.123170   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:14.123731   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:14.123758   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:14.124148   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:14.124379   37038 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:14.126190   37038 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:33:14.126207   37038 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:14.126506   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:14.126538   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:14.141036   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0814 16:33:14.141401   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:14.141795   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:14.141819   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:14.142160   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:14.142352   37038 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:33:14.145097   37038 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:14.145585   37038 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:14.145613   37038 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:14.145766   37038 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:14.146084   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:14.146125   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:14.161450   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39643
	I0814 16:33:14.161837   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:14.162299   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:14.162318   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:14.162638   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:14.162814   37038 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:14.163058   37038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:14.163107   37038 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:14.165898   37038 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:14.166383   37038 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:14.166413   37038 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:14.166464   37038 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:14.166682   37038 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:14.166855   37038 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:14.167054   37038 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:14.250635   37038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:14.265197   37038 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:14.265224   37038 api_server.go:166] Checking apiserver status ...
	I0814 16:33:14.265260   37038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:14.278519   37038 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:33:14.287855   37038 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:14.287909   37038 ssh_runner.go:195] Run: ls
	I0814 16:33:14.291867   37038 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:14.297177   37038 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:14.297201   37038 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:33:14.297210   37038 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:14.297227   37038 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:33:14.297512   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:14.297557   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:14.312248   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44185
	I0814 16:33:14.312654   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:14.313078   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:14.313098   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:14.313425   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:14.313589   37038 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:14.315143   37038 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:33:14.315156   37038 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:14.315523   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:14.315561   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:14.329671   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
	I0814 16:33:14.330027   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:14.330482   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:14.330505   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:14.330805   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:14.330988   37038 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:33:14.333635   37038 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:14.334033   37038 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:14.334058   37038 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:14.334206   37038 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:14.334489   37038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:14.334520   37038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:14.349080   37038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36367
	I0814 16:33:14.349459   37038 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:14.349909   37038 main.go:141] libmachine: Using API Version  1
	I0814 16:33:14.349930   37038 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:14.350200   37038 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:14.350430   37038 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:14.350620   37038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:14.350639   37038 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:14.353281   37038 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:14.353647   37038 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:14.353677   37038 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:14.353853   37038 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:14.354015   37038 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:14.354255   37038 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:14.354380   37038 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:14.433968   37038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:14.447752   37038 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
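The recurring "failed to get storage capacity of /var" error above comes from the status probe running `df -h /var | awk 'NR==2{print $5}'` on each node over SSH; once the dial to 192.168.39.225:22 fails, ha-597780-m02 is reported as Host:Error with Kubelet/APIServer Nonexistent. A minimal Go sketch of that probe follows (this is not minikube's actual implementation; a local `sh -c` runner stands in for the SSH session created in the log):

// Minimal sketch of the /var capacity probe seen in the log above.
// Assumption: a local shell runner replaces the SSH session; any error from
// the runner is mapped to a degraded host status, as for ha-597780-m02.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkVarUsage returns the used-percentage of /var, or an error the caller
// would translate into Host:Error in the node status struct.
func checkVarUsage(run func(cmd string) (string, error)) (string, error) {
	out, err := run(`df -h /var | awk 'NR==2{print $5}'`)
	if err != nil {
		return "", fmt.Errorf("failed to get storage capacity of /var: %w", err)
	}
	return strings.TrimSpace(out), nil
}

func main() {
	// Local stand-in for the per-node SSH runner.
	local := func(cmd string) (string, error) {
		b, err := exec.Command("sh", "-c", cmd).Output()
		return string(b), err
	}
	usage, err := checkVarUsage(local)
	if err != nil {
		fmt.Println("host status: Error:", err)
		return
	}
	fmt.Println("/var used:", usage)
}

Over SSH the same command runs through the client shown in the sshutil lines, and a dial error propagates directly into the status output quoted below.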
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 3 (4.852143649s)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:33:16.148732   37154 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:33:16.149000   37154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:16.149011   37154 out.go:304] Setting ErrFile to fd 2...
	I0814 16:33:16.149017   37154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:16.149282   37154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:33:16.149471   37154 out.go:298] Setting JSON to false
	I0814 16:33:16.149500   37154 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:16.149627   37154 notify.go:220] Checking for updates...
	I0814 16:33:16.149876   37154 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:16.149894   37154 status.go:255] checking status of ha-597780 ...
	I0814 16:33:16.150258   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:16.150356   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:16.165482   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I0814 16:33:16.165958   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:16.166817   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:16.166854   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:16.167152   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:16.167311   37154 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:33:16.168992   37154 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:33:16.169005   37154 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:16.169278   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:16.169325   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:16.184697   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40701
	I0814 16:33:16.185163   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:16.185607   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:16.185629   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:16.185914   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:16.186056   37154 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:33:16.188967   37154 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:16.189490   37154 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:16.189521   37154 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:16.189657   37154 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:16.189986   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:16.190030   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:16.205110   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I0814 16:33:16.205546   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:16.206006   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:16.206031   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:16.206320   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:16.206496   37154 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:33:16.206735   37154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:16.206765   37154 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:33:16.209570   37154 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:16.210003   37154 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:16.210025   37154 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:16.210215   37154 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:33:16.210385   37154 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:33:16.210561   37154 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:33:16.210713   37154 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:33:16.286440   37154 ssh_runner.go:195] Run: systemctl --version
	I0814 16:33:16.292050   37154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:16.306152   37154 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:16.306187   37154 api_server.go:166] Checking apiserver status ...
	I0814 16:33:16.306225   37154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:16.319731   37154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:33:16.329219   37154 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:16.329275   37154 ssh_runner.go:195] Run: ls
	I0814 16:33:16.333360   37154 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:16.337532   37154 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:16.337561   37154 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:33:16.337574   37154 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:16.337598   37154 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:33:16.337890   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:16.337927   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:16.352794   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39823
	I0814 16:33:16.353188   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:16.353658   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:16.353678   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:16.354040   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:16.354285   37154 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:33:16.355841   37154 status.go:330] ha-597780-m02 host status = "Running" (err=<nil>)
	I0814 16:33:16.355857   37154 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:16.356142   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:16.356181   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:16.371650   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36763
	I0814 16:33:16.372044   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:16.372458   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:16.372478   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:16.372801   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:16.372981   37154 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:33:16.376093   37154 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:16.376617   37154 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:16.376643   37154 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:16.376781   37154 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:16.377086   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:16.377126   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:16.393463   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I0814 16:33:16.393897   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:16.394351   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:16.394366   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:16.394677   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:16.394857   37154 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:33:16.395063   37154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:16.395082   37154 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:33:16.397796   37154 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:16.398220   37154 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:16.398247   37154 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:16.398338   37154 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:33:16.398500   37154 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:33:16.398632   37154 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:33:16.398754   37154 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	W0814 16:33:17.179551   37154 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:17.179610   37154 retry.go:31] will retry after 349.467369ms: dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:20.603587   37154 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:20.603663   37154 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0814 16:33:20.603677   37154 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:20.603687   37154 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 16:33:20.603716   37154 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:20.603723   37154 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:33:20.604039   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:20.604086   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:20.620643   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0814 16:33:20.621156   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:20.621754   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:20.621783   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:20.622146   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:20.622364   37154 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:20.624028   37154 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:33:20.624046   37154 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:20.624445   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:20.624492   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:20.639377   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42755
	I0814 16:33:20.639780   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:20.640174   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:20.640195   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:20.640532   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:20.640697   37154 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:33:20.643517   37154 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:20.643932   37154 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:20.643957   37154 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:20.644094   37154 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:20.644446   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:20.644484   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:20.660588   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36085
	I0814 16:33:20.661062   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:20.661514   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:20.661535   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:20.661808   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:20.661975   37154 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:20.662157   37154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:20.662179   37154 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:20.664936   37154 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:20.665368   37154 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:20.665395   37154 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:20.665539   37154 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:20.665682   37154 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:20.665819   37154 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:20.665939   37154 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:20.754913   37154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:20.770490   37154 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:20.770515   37154 api_server.go:166] Checking apiserver status ...
	I0814 16:33:20.770548   37154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:20.783854   37154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:33:20.793314   37154 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:20.793376   37154 ssh_runner.go:195] Run: ls
	I0814 16:33:20.797617   37154 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:20.803469   37154 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:20.803489   37154 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:33:20.803497   37154 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:20.803521   37154 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:33:20.803849   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:20.803886   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:20.818654   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44597
	I0814 16:33:20.819049   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:20.819530   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:20.819553   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:20.819884   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:20.820107   37154 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:20.821620   37154 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:33:20.821646   37154 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:20.821947   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:20.821982   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:20.836866   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0814 16:33:20.837220   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:20.837608   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:20.837626   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:20.837896   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:20.838067   37154 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:33:20.840510   37154 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:20.840888   37154 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:20.840939   37154 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:20.841036   37154 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:20.841302   37154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:20.841333   37154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:20.857493   37154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I0814 16:33:20.857894   37154 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:20.858339   37154 main.go:141] libmachine: Using API Version  1
	I0814 16:33:20.858352   37154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:20.858645   37154 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:20.858811   37154 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:20.858989   37154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:20.859051   37154 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:20.861919   37154 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:20.862301   37154 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:20.862333   37154 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:20.862455   37154 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:20.862635   37154 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:20.862780   37154 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:20.862913   37154 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:20.942442   37154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:20.956686   37154 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
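For the reachable control-plane nodes the log then probes apiserver health by issuing a GET to https://192.168.39.254:8443/healthz and treating only an HTTP 200 with body "ok" as Running. A small self-contained sketch of that check is below (assumption: TLS verification is disabled so the example runs outside the cluster; a real client would trust the cluster CA instead):

// Sketch of the healthz probe recorded in the log above; not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy reports whether the endpoint answered 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: skip CA verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	// Address taken from the log above (the ha-597780 load-balanced endpoint).
	ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println("apiserver running:", ok, "err:", err)
}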
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 3 (4.209418479s)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:33:23.114825   37253 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:33:23.114951   37253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:23.114962   37253 out.go:304] Setting ErrFile to fd 2...
	I0814 16:33:23.114969   37253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:23.115167   37253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:33:23.115418   37253 out.go:298] Setting JSON to false
	I0814 16:33:23.115450   37253 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:23.115490   37253 notify.go:220] Checking for updates...
	I0814 16:33:23.115800   37253 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:23.115814   37253 status.go:255] checking status of ha-597780 ...
	I0814 16:33:23.116141   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:23.116199   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:23.135877   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46541
	I0814 16:33:23.136368   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:23.136896   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:23.136945   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:23.137315   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:23.137535   37253 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:33:23.139132   37253 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:33:23.139151   37253 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:23.139467   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:23.139502   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:23.154763   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42023
	I0814 16:33:23.155239   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:23.155805   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:23.155824   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:23.156181   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:23.156367   37253 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:33:23.159347   37253 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:23.159858   37253 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:23.159892   37253 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:23.159980   37253 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:23.160273   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:23.160315   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:23.175393   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I0814 16:33:23.175943   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:23.176433   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:23.176455   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:23.176772   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:23.176972   37253 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:33:23.177142   37253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:23.177170   37253 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:33:23.179818   37253 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:23.180264   37253 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:23.180291   37253 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:23.180492   37253 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:33:23.180710   37253 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:33:23.180903   37253 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:33:23.181120   37253 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:33:23.258089   37253 ssh_runner.go:195] Run: systemctl --version
	I0814 16:33:23.263666   37253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:23.277989   37253 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:23.278024   37253 api_server.go:166] Checking apiserver status ...
	I0814 16:33:23.278062   37253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:23.291270   37253 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:33:23.300979   37253 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:23.301046   37253 ssh_runner.go:195] Run: ls
	I0814 16:33:23.304958   37253 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:23.311217   37253 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:23.311251   37253 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:33:23.311271   37253 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:23.311297   37253 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:33:23.311613   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:23.311664   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:23.326102   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37217
	I0814 16:33:23.326552   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:23.327036   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:23.327054   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:23.327395   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:23.327556   37253 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:33:23.329145   37253 status.go:330] ha-597780-m02 host status = "Running" (err=<nil>)
	I0814 16:33:23.329162   37253 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:23.329435   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:23.329471   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:23.343912   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0814 16:33:23.344287   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:23.344739   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:23.344760   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:23.345056   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:23.345240   37253 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:33:23.347810   37253 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:23.348165   37253 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:23.348202   37253 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:23.348308   37253 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:23.348628   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:23.348666   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:23.363758   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34637
	I0814 16:33:23.364124   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:23.364618   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:23.364640   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:23.364927   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:23.365129   37253 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:33:23.365313   37253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:23.365335   37253 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:33:23.368310   37253 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:23.368727   37253 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:23.368753   37253 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:23.368897   37253 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:33:23.369076   37253 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:33:23.369206   37253 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:33:23.369342   37253 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	W0814 16:33:23.675601   37253 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:23.675661   37253 retry.go:31] will retry after 191.331603ms: dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:26.939603   37253 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:26.939718   37253 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0814 16:33:26.939738   37253 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:26.939752   37253 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 16:33:26.939786   37253 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:26.939796   37253 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:33:26.940210   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:26.940289   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:26.954848   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41783
	I0814 16:33:26.955259   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:26.955719   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:26.955742   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:26.956051   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:26.956258   37253 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:26.957826   37253 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:33:26.957844   37253 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:26.958158   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:26.958191   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:26.973365   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38873
	I0814 16:33:26.973772   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:26.974239   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:26.974256   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:26.974520   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:26.974695   37253 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:33:26.977191   37253 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:26.977627   37253 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:26.977661   37253 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:26.977700   37253 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:26.977976   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:26.978013   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:26.992291   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
	I0814 16:33:26.992705   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:26.993143   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:26.993161   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:26.993409   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:26.993568   37253 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:26.993707   37253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:26.993727   37253 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:26.996427   37253 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:26.996831   37253 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:26.996852   37253 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:26.996997   37253 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:26.997143   37253 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:26.997261   37253 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:26.997394   37253 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:27.082120   37253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:27.096359   37253 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:27.096388   37253 api_server.go:166] Checking apiserver status ...
	I0814 16:33:27.096426   37253 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:27.111731   37253 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:33:27.123075   37253 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:27.123130   37253 ssh_runner.go:195] Run: ls
	I0814 16:33:27.127275   37253 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:27.133509   37253 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:27.133550   37253 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:33:27.133562   37253 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:27.133580   37253 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:33:27.133958   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:27.134006   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:27.148812   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44767
	I0814 16:33:27.149315   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:27.149769   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:27.149789   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:27.150091   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:27.150294   37253 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:27.151770   37253 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:33:27.151785   37253 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:27.152075   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:27.152108   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:27.166852   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34979
	I0814 16:33:27.167187   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:27.167634   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:27.167652   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:27.167925   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:27.168114   37253 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:33:27.170760   37253 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:27.171169   37253 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:27.171204   37253 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:27.171338   37253 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:27.171685   37253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:27.171724   37253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:27.186055   37253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33887
	I0814 16:33:27.186372   37253 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:27.186799   37253 main.go:141] libmachine: Using API Version  1
	I0814 16:33:27.186816   37253 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:27.187131   37253 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:27.187320   37253 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:27.187524   37253 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:27.187546   37253 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:27.190049   37253 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:27.190476   37253 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:27.190499   37253 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:27.190605   37253 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:27.190767   37253 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:27.190925   37253 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:27.191046   37253 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:27.270081   37253 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:27.283433   37253 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 3 (3.709852895s)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:33:30.401981   37354 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:33:30.402092   37354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:30.402101   37354 out.go:304] Setting ErrFile to fd 2...
	I0814 16:33:30.402105   37354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:30.402276   37354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:33:30.402440   37354 out.go:298] Setting JSON to false
	I0814 16:33:30.402465   37354 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:30.402587   37354 notify.go:220] Checking for updates...
	I0814 16:33:30.402803   37354 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:30.402815   37354 status.go:255] checking status of ha-597780 ...
	I0814 16:33:30.403171   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:30.403205   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:30.421759   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41525
	I0814 16:33:30.422255   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:30.422878   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:30.422903   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:30.423226   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:30.423451   37354 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:33:30.425209   37354 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:33:30.425227   37354 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:30.425521   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:30.425571   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:30.440472   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0814 16:33:30.440898   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:30.441392   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:30.441420   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:30.441733   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:30.441911   37354 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:33:30.444517   37354 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:30.445011   37354 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:30.445036   37354 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:30.445211   37354 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:30.445570   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:30.445620   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:30.461043   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38455
	I0814 16:33:30.461479   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:30.461924   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:30.461944   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:30.462227   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:30.462407   37354 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:33:30.462677   37354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:30.462705   37354 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:33:30.465692   37354 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:30.466104   37354 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:30.466137   37354 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:30.466282   37354 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:33:30.466460   37354 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:33:30.466610   37354 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:33:30.466723   37354 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:33:30.547125   37354 ssh_runner.go:195] Run: systemctl --version
	I0814 16:33:30.553447   37354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:30.570774   37354 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:30.570808   37354 api_server.go:166] Checking apiserver status ...
	I0814 16:33:30.570852   37354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:30.589137   37354 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:33:30.598898   37354 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:30.598953   37354 ssh_runner.go:195] Run: ls
	I0814 16:33:30.603301   37354 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:30.609488   37354 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:30.609511   37354 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:33:30.609521   37354 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:30.609535   37354 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:33:30.609831   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:30.609872   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:30.624787   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39275
	I0814 16:33:30.625261   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:30.625769   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:30.625794   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:30.626086   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:30.626256   37354 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:33:30.627762   37354 status.go:330] ha-597780-m02 host status = "Running" (err=<nil>)
	I0814 16:33:30.627779   37354 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:30.628172   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:30.628214   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:30.646163   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I0814 16:33:30.646608   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:30.647147   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:30.647175   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:30.647591   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:30.647800   37354 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:33:30.650887   37354 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:30.651437   37354 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:30.651469   37354 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:30.651637   37354 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:33:30.651924   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:30.651965   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:30.668187   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40675
	I0814 16:33:30.668654   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:30.669130   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:30.669159   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:30.669492   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:30.669677   37354 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:33:30.669876   37354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:30.669896   37354 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:33:30.672615   37354 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:30.673048   37354 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:33:30.673078   37354 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:33:30.673186   37354 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:33:30.673382   37354 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:33:30.673536   37354 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:33:30.673673   37354 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	W0814 16:33:33.723634   37354 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.225:22: connect: no route to host
	W0814 16:33:33.723721   37354 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0814 16:33:33.723735   37354 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:33.723742   37354 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0814 16:33:33.723758   37354 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	I0814 16:33:33.723765   37354 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:33:33.724098   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:33.724135   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:33.738744   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I0814 16:33:33.739338   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:33.739820   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:33.739846   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:33.740158   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:33.740334   37354 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:33.741825   37354 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:33:33.741843   37354 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:33.742221   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:33.742263   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:33.757228   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0814 16:33:33.757638   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:33.758062   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:33.758083   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:33.758382   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:33.758596   37354 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:33:33.762014   37354 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:33.762507   37354 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:33.762533   37354 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:33.762687   37354 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:33.763026   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:33.763066   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:33.777740   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32939
	I0814 16:33:33.778149   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:33.778579   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:33.778601   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:33.778869   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:33.779076   37354 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:33.779248   37354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:33.779268   37354 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:33.781616   37354 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:33.782000   37354 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:33.782031   37354 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:33.782179   37354 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:33.782359   37354 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:33.782509   37354 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:33.782622   37354 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:33.867260   37354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:33.882611   37354 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:33.882637   37354 api_server.go:166] Checking apiserver status ...
	I0814 16:33:33.882666   37354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:33.897494   37354 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:33:33.907391   37354 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:33.907448   37354 ssh_runner.go:195] Run: ls
	I0814 16:33:33.911665   37354 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:33.915681   37354 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:33.915703   37354 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:33:33.915714   37354 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:33.915732   37354 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:33:33.916056   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:33.916096   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:33.931687   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40897
	I0814 16:33:33.932142   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:33.932605   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:33.932626   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:33.932889   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:33.933059   37354 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:33.934529   37354 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:33:33.934551   37354 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:33.934901   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:33.934941   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:33.949284   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43451
	I0814 16:33:33.949691   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:33.950165   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:33.950185   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:33.950445   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:33.950736   37354 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:33:33.953344   37354 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:33.953772   37354 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:33.953806   37354 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:33.953917   37354 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:33.954198   37354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:33.954234   37354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:33.968734   37354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I0814 16:33:33.969068   37354 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:33.969504   37354 main.go:141] libmachine: Using API Version  1
	I0814 16:33:33.969524   37354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:33.969805   37354 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:33.969984   37354 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:33.970137   37354 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:33.970157   37354 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:33.972697   37354 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:33.973121   37354 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:33.973140   37354 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:33.973375   37354 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:33.973519   37354 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:33.973666   37354 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:33.973795   37354 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:34.056129   37354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:34.069747   37354 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
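Note: the check above can be reproduced by hand against the same profile. This is an illustrative sketch only; the binary path, profile name, and flags are copied verbatim from the log above, and the exit-code comment simply restates what these runs returned rather than a definitive mapping.

	out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
	echo "exit code: $?"   # 0 when every node reports healthy; the runs above returned 3 (m02 unreachable) and 7 (m02 stopped)
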
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 7 (603.964991ms)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:33:44.116461   37506 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:33:44.116714   37506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:44.116724   37506 out.go:304] Setting ErrFile to fd 2...
	I0814 16:33:44.116728   37506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:44.116913   37506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:33:44.117080   37506 out.go:298] Setting JSON to false
	I0814 16:33:44.117106   37506 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:44.117150   37506 notify.go:220] Checking for updates...
	I0814 16:33:44.117917   37506 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:44.117984   37506 status.go:255] checking status of ha-597780 ...
	I0814 16:33:44.118878   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.119068   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.134185   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0814 16:33:44.134618   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.135116   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.135135   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.135573   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.135794   37506 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:33:44.137534   37506 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:33:44.137550   37506 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:44.137838   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.137877   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.153108   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38125
	I0814 16:33:44.153460   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.153947   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.153974   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.154303   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.154502   37506 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:33:44.157372   37506 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:44.157843   37506 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:44.157871   37506 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:44.157973   37506 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:44.158363   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.158407   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.172984   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42015
	I0814 16:33:44.173402   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.173837   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.173870   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.174219   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.174401   37506 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:33:44.174612   37506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:44.174633   37506 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:33:44.177283   37506 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:44.177621   37506 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:44.177651   37506 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:44.177764   37506 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:33:44.177910   37506 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:33:44.178041   37506 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:33:44.178148   37506 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:33:44.259463   37506 ssh_runner.go:195] Run: systemctl --version
	I0814 16:33:44.266366   37506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:44.282229   37506 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:44.282255   37506 api_server.go:166] Checking apiserver status ...
	I0814 16:33:44.282282   37506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:44.296870   37506 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:33:44.307448   37506 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:44.307504   37506 ssh_runner.go:195] Run: ls
	I0814 16:33:44.311348   37506 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:44.317338   37506 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:44.317360   37506 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:33:44.317370   37506 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:44.317409   37506 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:33:44.317800   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.317842   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.332883   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41447
	I0814 16:33:44.333340   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.333814   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.333834   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.334166   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.334321   37506 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:33:44.335707   37506 status.go:330] ha-597780-m02 host status = "Stopped" (err=<nil>)
	I0814 16:33:44.335722   37506 status.go:343] host is not running, skipping remaining checks
	I0814 16:33:44.335730   37506 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:44.335749   37506 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:33:44.336076   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.336126   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.350048   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41573
	I0814 16:33:44.350401   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.350831   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.350849   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.351122   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.351316   37506 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:44.352695   37506 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:33:44.352711   37506 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:44.353075   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.353113   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.367311   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0814 16:33:44.367678   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.368173   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.368191   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.368512   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.368675   37506 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:33:44.371409   37506 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:44.371801   37506 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:44.371828   37506 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:44.371951   37506 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:44.372258   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.372299   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.388113   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36997
	I0814 16:33:44.388517   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.388985   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.389004   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.389315   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.389534   37506 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:44.389710   37506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:44.389728   37506 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:44.392650   37506 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:44.393029   37506 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:44.393045   37506 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:44.393197   37506 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:44.393363   37506 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:44.393479   37506 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:44.393605   37506 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:44.475387   37506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:44.491784   37506 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:44.491823   37506 api_server.go:166] Checking apiserver status ...
	I0814 16:33:44.491873   37506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:44.505938   37506 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:33:44.515855   37506 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:44.515912   37506 ssh_runner.go:195] Run: ls
	I0814 16:33:44.520303   37506 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:44.524599   37506 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:44.524618   37506 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:33:44.524626   37506 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:44.524640   37506 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:33:44.524934   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.524964   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.539714   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37377
	I0814 16:33:44.540164   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.540625   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.540669   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.540977   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.541130   37506 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:44.542619   37506 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:33:44.542636   37506 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:44.542982   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.543027   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.557304   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34565
	I0814 16:33:44.557708   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.558133   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.558151   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.558476   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.558663   37506 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:33:44.561259   37506 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:44.561645   37506 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:44.561670   37506 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:44.561817   37506 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:44.562196   37506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:44.562240   37506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:44.576620   37506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I0814 16:33:44.577027   37506 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:44.577520   37506 main.go:141] libmachine: Using API Version  1
	I0814 16:33:44.577544   37506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:44.577869   37506 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:44.578144   37506 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:44.578359   37506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:44.578380   37506 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:44.580889   37506 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:44.581281   37506 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:44.581313   37506 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:44.581418   37506 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:44.581571   37506 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:44.581708   37506 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:44.581802   37506 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:44.662442   37506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:44.677955   37506 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
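The per-node probes visible in these logs (root-disk usage via df, kubelet service state via systemctl, and the apiserver healthz endpoint through the HA virtual IP) can also be exercised manually. This is a sketch under assumptions, not part of the test harness: the IP, SSH key path, username, and healthz URL are copied from the log lines above, and the -k flag is assumed only because the apiserver serves a cluster-internal certificate.

	# disk usage and kubelet state on ha-597780-m03, the same checks the status command runs over SSH
	ssh -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa \
	    docker@192.168.39.167 'df -h /var | awk "NR==2{print \$5}"; sudo systemctl is-active kubelet'
	# apiserver health through the HA virtual IP
	curl -k https://192.168.39.254:8443/healthz
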
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 7 (603.829214ms)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-597780-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:33:54.951472   37610 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:33:54.951738   37610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:54.951748   37610 out.go:304] Setting ErrFile to fd 2...
	I0814 16:33:54.951752   37610 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:54.951913   37610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:33:54.952088   37610 out.go:298] Setting JSON to false
	I0814 16:33:54.952114   37610 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:54.952215   37610 notify.go:220] Checking for updates...
	I0814 16:33:54.952463   37610 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:54.952476   37610 status.go:255] checking status of ha-597780 ...
	I0814 16:33:54.952853   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:54.952912   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:54.971299   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I0814 16:33:54.971848   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:54.972503   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:54.972530   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:54.972954   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:54.973132   37610 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:33:54.975095   37610 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:33:54.975113   37610 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:54.975497   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:54.975565   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:54.991670   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46785
	I0814 16:33:54.992209   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:54.992673   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:54.992689   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:54.993041   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:54.993210   37610 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:33:54.996580   37610 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:54.997142   37610 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:54.997185   37610 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:54.997365   37610 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:33:54.997698   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:54.997742   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:55.012489   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0814 16:33:55.012881   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:55.013313   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:55.013334   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:55.013731   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:55.013913   37610 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:33:55.014101   37610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:55.014121   37610 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:33:55.016840   37610 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:55.017270   37610 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:33:55.017291   37610 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:33:55.017395   37610 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:33:55.017565   37610 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:33:55.017704   37610 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:33:55.017854   37610 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:33:55.094990   37610 ssh_runner.go:195] Run: systemctl --version
	I0814 16:33:55.101630   37610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:55.116462   37610 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:55.116499   37610 api_server.go:166] Checking apiserver status ...
	I0814 16:33:55.116561   37610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:55.132414   37610 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup
	W0814 16:33:55.143093   37610 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1091/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:55.143153   37610 ssh_runner.go:195] Run: ls
	I0814 16:33:55.148463   37610 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:55.152950   37610 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:55.152978   37610 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:33:55.152991   37610 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:55.153014   37610 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:33:55.153305   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:55.153334   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:55.167921   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0814 16:33:55.168358   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:55.168769   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:55.168791   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:55.169113   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:55.169362   37610 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:33:55.170925   37610 status.go:330] ha-597780-m02 host status = "Stopped" (err=<nil>)
	I0814 16:33:55.170935   37610 status.go:343] host is not running, skipping remaining checks
	I0814 16:33:55.170942   37610 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:55.170965   37610 status.go:255] checking status of ha-597780-m03 ...
	I0814 16:33:55.171243   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:55.171279   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:55.186014   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0814 16:33:55.186503   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:55.187015   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:55.187032   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:55.187347   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:55.187532   37610 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:55.189250   37610 status.go:330] ha-597780-m03 host status = "Running" (err=<nil>)
	I0814 16:33:55.189266   37610 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:55.189591   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:55.189628   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:55.204528   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41965
	I0814 16:33:55.204908   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:55.205372   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:55.205393   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:55.205669   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:55.205870   37610 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:33:55.208726   37610 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:55.209105   37610 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:55.209129   37610 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:55.209244   37610 host.go:66] Checking if "ha-597780-m03" exists ...
	I0814 16:33:55.209584   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:55.209630   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:55.225085   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41403
	I0814 16:33:55.225455   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:55.225872   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:55.225901   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:55.226190   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:55.226388   37610 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:55.226565   37610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:55.226582   37610 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:55.229113   37610 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:55.229558   37610 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:55.229586   37610 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:55.229752   37610 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:55.229930   37610 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:55.230062   37610 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:55.230180   37610 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:55.314258   37610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:55.328560   37610 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:33:55.328586   37610 api_server.go:166] Checking apiserver status ...
	I0814 16:33:55.328631   37610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:33:55.341109   37610 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	W0814 16:33:55.350201   37610 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:33:55.350259   37610 ssh_runner.go:195] Run: ls
	I0814 16:33:55.354041   37610 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:33:55.358305   37610 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:33:55.358325   37610 status.go:422] ha-597780-m03 apiserver status = Running (err=<nil>)
	I0814 16:33:55.358332   37610 status.go:257] ha-597780-m03 status: &{Name:ha-597780-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:33:55.358346   37610 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:33:55.358660   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:55.358688   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:55.373173   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40839
	I0814 16:33:55.373520   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:55.373939   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:55.373959   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:55.374248   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:55.374472   37610 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:55.376011   37610 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:33:55.376026   37610 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:55.376311   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:55.376351   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:55.391465   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I0814 16:33:55.391883   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:55.392372   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:55.392393   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:55.392719   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:55.392884   37610 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:33:55.395890   37610 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:55.396303   37610 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:55.396335   37610 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:55.396493   37610 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:33:55.396840   37610 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:55.396877   37610 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:55.411811   37610 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34583
	I0814 16:33:55.412220   37610 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:55.412709   37610 main.go:141] libmachine: Using API Version  1
	I0814 16:33:55.412733   37610 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:55.413056   37610 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:55.413257   37610 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:55.413438   37610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:33:55.413458   37610 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:55.416813   37610 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:55.417337   37610 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:55.417368   37610 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:55.417593   37610 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:55.417789   37610 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:55.417945   37610 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:55.418096   37610 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:55.498190   37610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:33:55.512572   37610 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr" : exit status 7
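The stderr above shows the per-node probe order that `minikube status` followed before exiting with status 7: ask the libmachine driver for the host state, SSH in and run `sudo systemctl is-active --quiet service kubelet`, and, for control-plane nodes, GET `/healthz` on the load-balancer endpoint (https://192.168.39.254:8443). The Go sketch below is only an illustrative reconstruction of that flow; probeNode, sshRun and NodeStatus are hypothetical names for this report, not minikube APIs.

// Illustrative sketch only: mirrors the probe order visible in the stderr
// above (host state -> kubelet unit -> apiserver /healthz). Helper names
// such as sshRun and NodeStatus are hypothetical, not minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

type NodeStatus struct {
	Host, Kubelet, APIServer string
}

// sshRun is a stand-in for running a command on the node over SSH.
func sshRun(addr, cmd string) error {
	return exec.Command("ssh", addr, cmd).Run()
}

func probeNode(addr string, controlPlane bool, lbURL string) NodeStatus {
	st := NodeStatus{Host: "Running", Kubelet: "Stopped", APIServer: "Irrelevant"}

	// Kubelet check: same command as the log's "systemctl is-active --quiet service kubelet".
	if err := sshRun(addr, "sudo systemctl is-active --quiet service kubelet"); err == nil {
		st.Kubelet = "Running"
	}

	if !controlPlane {
		return st
	}

	// Control-plane check: GET https://<lb>:8443/healthz and expect 200 "ok".
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	st.APIServer = "Stopped"
	if resp, err := client.Get(lbURL + "/healthz"); err == nil {
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			st.APIServer = "Running"
		}
	}
	return st
}

func main() {
	fmt.Printf("%+v\n", probeNode("192.168.39.167", true, "https://192.168.39.254:8443"))
}

In the run above the flow stopped early for ha-597780-m02 because the driver already reported the host as Stopped, which is why the kubelet and apiserver checks were skipped for that node.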
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-597780 -n ha-597780
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-597780 logs -n 25: (1.3415916s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780:/home/docker/cp-test_ha-597780-m03_ha-597780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780 sudo cat                                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m02:/home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m02 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04:/home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m04 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp testdata/cp-test.txt                                                | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780:/home/docker/cp-test_ha-597780-m04_ha-597780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780 sudo cat                                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m02:/home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m02 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03:/home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m03 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-597780 node stop m02 -v=7                                                     | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-597780 node start m02 -v=7                                                    | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:25:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:25:16.550739   31878 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:25:16.550860   31878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:25:16.550870   31878 out.go:304] Setting ErrFile to fd 2...
	I0814 16:25:16.550875   31878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:25:16.551070   31878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:25:16.551704   31878 out.go:298] Setting JSON to false
	I0814 16:25:16.552522   31878 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4061,"bootTime":1723648656,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:25:16.552611   31878 start.go:139] virtualization: kvm guest
	I0814 16:25:16.554763   31878 out.go:177] * [ha-597780] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:25:16.556019   31878 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:25:16.556020   31878 notify.go:220] Checking for updates...
	I0814 16:25:16.558421   31878 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:25:16.559520   31878 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:25:16.560635   31878 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:25:16.561797   31878 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:25:16.562971   31878 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:25:16.564285   31878 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:25:16.597932   31878 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 16:25:16.599009   31878 start.go:297] selected driver: kvm2
	I0814 16:25:16.599021   31878 start.go:901] validating driver "kvm2" against <nil>
	I0814 16:25:16.599032   31878 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:25:16.600027   31878 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:25:16.600112   31878 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 16:25:16.614699   31878 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 16:25:16.614764   31878 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:25:16.614967   31878 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:25:16.615009   31878 cni.go:84] Creating CNI manager for ""
	I0814 16:25:16.615018   31878 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0814 16:25:16.615023   31878 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0814 16:25:16.615081   31878 start.go:340] cluster config:
	{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:25:16.615167   31878 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:25:16.616850   31878 out.go:177] * Starting "ha-597780" primary control-plane node in "ha-597780" cluster
	I0814 16:25:16.617911   31878 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:25:16.617944   31878 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:25:16.617957   31878 cache.go:56] Caching tarball of preloaded images
	I0814 16:25:16.618047   31878 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:25:16.618061   31878 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:25:16.618394   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:25:16.618416   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json: {Name:mk4378090493a3a71e7f59c8a9d85581c5cdd67d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
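At this point the generated cluster config has been written to profiles/ha-597780/config.json under a write lock. A minimal sketch of that persist step, assuming a hypothetical trimmed-down config struct (the real ClusterConfig carries the full field set dumped above):

// Illustrative sketch: persisting a trimmed-down cluster config the way the
// log's "Saving config to .../profiles/ha-597780/config.json" step does.
// The struct below is a hypothetical subset, not minikube's real ClusterConfig.
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

type clusterConfig struct {
	Name              string `json:"Name"`
	Driver            string `json:"Driver"`
	Memory            int    `json:"Memory"`
	CPUs              int    `json:"CPUs"`
	KubernetesVersion string `json:"KubernetesVersion"`
	ContainerRuntime  string `json:"ContainerRuntime"`
}

func saveProfile(miniHome string, cfg clusterConfig) error {
	dir := filepath.Join(miniHome, "profiles", cfg.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	// The real code also holds a write lock on the target path; omitted here.
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	cfg := clusterConfig{
		Name: "ha-597780", Driver: "kvm2", Memory: 2200, CPUs: 2,
		KubernetesVersion: "v1.31.0", ContainerRuntime: "crio",
	}
	if err := saveProfile(os.TempDir(), cfg); err != nil {
		log.Fatal(err)
	}
}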
	I0814 16:25:16.618556   31878 start.go:360] acquireMachinesLock for ha-597780: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:25:16.618595   31878 start.go:364] duration metric: took 23.753µs to acquireMachinesLock for "ha-597780"
	I0814 16:25:16.618618   31878 start.go:93] Provisioning new machine with config: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:25:16.618699   31878 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 16:25:16.620236   31878 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 16:25:16.620378   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:25:16.620425   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:25:16.634691   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0814 16:25:16.635145   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:25:16.635712   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:25:16.635731   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:25:16.636011   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:25:16.636184   31878 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:25:16.636290   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:16.636436   31878 start.go:159] libmachine.API.Create for "ha-597780" (driver="kvm2")
	I0814 16:25:16.636472   31878 client.go:168] LocalClient.Create starting
	I0814 16:25:16.636507   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 16:25:16.636542   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:25:16.636559   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:25:16.636624   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 16:25:16.636658   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:25:16.636679   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:25:16.636704   31878 main.go:141] libmachine: Running pre-create checks...
	I0814 16:25:16.636716   31878 main.go:141] libmachine: (ha-597780) Calling .PreCreateCheck
	I0814 16:25:16.637110   31878 main.go:141] libmachine: (ha-597780) Calling .GetConfigRaw
	I0814 16:25:16.637452   31878 main.go:141] libmachine: Creating machine...
	I0814 16:25:16.637464   31878 main.go:141] libmachine: (ha-597780) Calling .Create
	I0814 16:25:16.637570   31878 main.go:141] libmachine: (ha-597780) Creating KVM machine...
	I0814 16:25:16.638908   31878 main.go:141] libmachine: (ha-597780) DBG | found existing default KVM network
	I0814 16:25:16.639577   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:16.639463   31901 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0814 16:25:16.639613   31878 main.go:141] libmachine: (ha-597780) DBG | created network xml: 
	I0814 16:25:16.639637   31878 main.go:141] libmachine: (ha-597780) DBG | <network>
	I0814 16:25:16.639650   31878 main.go:141] libmachine: (ha-597780) DBG |   <name>mk-ha-597780</name>
	I0814 16:25:16.639684   31878 main.go:141] libmachine: (ha-597780) DBG |   <dns enable='no'/>
	I0814 16:25:16.639698   31878 main.go:141] libmachine: (ha-597780) DBG |   
	I0814 16:25:16.639711   31878 main.go:141] libmachine: (ha-597780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0814 16:25:16.639718   31878 main.go:141] libmachine: (ha-597780) DBG |     <dhcp>
	I0814 16:25:16.639727   31878 main.go:141] libmachine: (ha-597780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0814 16:25:16.639737   31878 main.go:141] libmachine: (ha-597780) DBG |     </dhcp>
	I0814 16:25:16.639750   31878 main.go:141] libmachine: (ha-597780) DBG |   </ip>
	I0814 16:25:16.639759   31878 main.go:141] libmachine: (ha-597780) DBG |   
	I0814 16:25:16.639764   31878 main.go:141] libmachine: (ha-597780) DBG | </network>
	I0814 16:25:16.639776   31878 main.go:141] libmachine: (ha-597780) DBG | 
	I0814 16:25:16.644808   31878 main.go:141] libmachine: (ha-597780) DBG | trying to create private KVM network mk-ha-597780 192.168.39.0/24...
	I0814 16:25:16.708926   31878 main.go:141] libmachine: (ha-597780) DBG | private KVM network mk-ha-597780 192.168.39.0/24 created
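The free-subnet search logged just above (network.go picking 192.168.39.0/24) can be sketched as a scan over candidate /24 ranges, rejecting any that overlap networks already present on the host. The candidate list and conflict check below are assumptions for illustration, not the driver's actual implementation.

// Illustrative sketch of choosing a free private /24 for the cluster network,
// the step logged above as "using free private subnet 192.168.39.0/24".
// The candidate list and the inUse set are examples, not real host state.
package main

import (
	"fmt"
	"net"
)

// pickFreeSubnet returns the first candidate /24 that does not overlap a subnet already in use.
func pickFreeSubnet(candidates []string, inUse []*net.IPNet) (*net.IPNet, error) {
	for _, c := range candidates {
		_, subnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		conflict := false
		for _, used := range inUse {
			if used.Contains(subnet.IP) || subnet.Contains(used.IP) {
				conflict = true
				break
			}
		}
		if !conflict {
			return subnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	_, libvirtDefault, _ := net.ParseCIDR("192.168.122.0/24") // e.g. libvirt's default network
	subnet, err := pickFreeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24"}, []*net.IPNet{libvirtDefault})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // 192.168.39.0/24
}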
	I0814 16:25:16.708967   31878 main.go:141] libmachine: (ha-597780) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780 ...
	I0814 16:25:16.708983   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:16.708894   31901 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:25:16.709002   31878 main.go:141] libmachine: (ha-597780) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:25:16.709027   31878 main.go:141] libmachine: (ha-597780) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 16:25:16.949606   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:16.949479   31901 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa...
	I0814 16:25:17.134823   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:17.134697   31901 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/ha-597780.rawdisk...
	I0814 16:25:17.134847   31878 main.go:141] libmachine: (ha-597780) DBG | Writing magic tar header
	I0814 16:25:17.134861   31878 main.go:141] libmachine: (ha-597780) DBG | Writing SSH key tar header
	I0814 16:25:17.134872   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:17.134813   31901 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780 ...
	I0814 16:25:17.134887   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780
	I0814 16:25:17.134925   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780 (perms=drwx------)
	I0814 16:25:17.134959   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 16:25:17.134977   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 16:25:17.134989   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 16:25:17.135014   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:25:17.135027   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 16:25:17.135040   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 16:25:17.135056   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 16:25:17.135068   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home/jenkins
	I0814 16:25:17.135081   31878 main.go:141] libmachine: (ha-597780) DBG | Checking permissions on dir: /home
	I0814 16:25:17.135093   31878 main.go:141] libmachine: (ha-597780) DBG | Skipping /home - not owner
	I0814 16:25:17.135111   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 16:25:17.135123   31878 main.go:141] libmachine: (ha-597780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 16:25:17.135135   31878 main.go:141] libmachine: (ha-597780) Creating domain...
	I0814 16:25:17.136109   31878 main.go:141] libmachine: (ha-597780) define libvirt domain using xml: 
	I0814 16:25:17.136137   31878 main.go:141] libmachine: (ha-597780) <domain type='kvm'>
	I0814 16:25:17.136149   31878 main.go:141] libmachine: (ha-597780)   <name>ha-597780</name>
	I0814 16:25:17.136163   31878 main.go:141] libmachine: (ha-597780)   <memory unit='MiB'>2200</memory>
	I0814 16:25:17.136174   31878 main.go:141] libmachine: (ha-597780)   <vcpu>2</vcpu>
	I0814 16:25:17.136196   31878 main.go:141] libmachine: (ha-597780)   <features>
	I0814 16:25:17.136205   31878 main.go:141] libmachine: (ha-597780)     <acpi/>
	I0814 16:25:17.136214   31878 main.go:141] libmachine: (ha-597780)     <apic/>
	I0814 16:25:17.136226   31878 main.go:141] libmachine: (ha-597780)     <pae/>
	I0814 16:25:17.136236   31878 main.go:141] libmachine: (ha-597780)     
	I0814 16:25:17.136247   31878 main.go:141] libmachine: (ha-597780)   </features>
	I0814 16:25:17.136256   31878 main.go:141] libmachine: (ha-597780)   <cpu mode='host-passthrough'>
	I0814 16:25:17.136268   31878 main.go:141] libmachine: (ha-597780)   
	I0814 16:25:17.136277   31878 main.go:141] libmachine: (ha-597780)   </cpu>
	I0814 16:25:17.136286   31878 main.go:141] libmachine: (ha-597780)   <os>
	I0814 16:25:17.136296   31878 main.go:141] libmachine: (ha-597780)     <type>hvm</type>
	I0814 16:25:17.136308   31878 main.go:141] libmachine: (ha-597780)     <boot dev='cdrom'/>
	I0814 16:25:17.136322   31878 main.go:141] libmachine: (ha-597780)     <boot dev='hd'/>
	I0814 16:25:17.136334   31878 main.go:141] libmachine: (ha-597780)     <bootmenu enable='no'/>
	I0814 16:25:17.136342   31878 main.go:141] libmachine: (ha-597780)   </os>
	I0814 16:25:17.136351   31878 main.go:141] libmachine: (ha-597780)   <devices>
	I0814 16:25:17.136361   31878 main.go:141] libmachine: (ha-597780)     <disk type='file' device='cdrom'>
	I0814 16:25:17.136376   31878 main.go:141] libmachine: (ha-597780)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/boot2docker.iso'/>
	I0814 16:25:17.136388   31878 main.go:141] libmachine: (ha-597780)       <target dev='hdc' bus='scsi'/>
	I0814 16:25:17.136401   31878 main.go:141] libmachine: (ha-597780)       <readonly/>
	I0814 16:25:17.136411   31878 main.go:141] libmachine: (ha-597780)     </disk>
	I0814 16:25:17.136422   31878 main.go:141] libmachine: (ha-597780)     <disk type='file' device='disk'>
	I0814 16:25:17.136435   31878 main.go:141] libmachine: (ha-597780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 16:25:17.136449   31878 main.go:141] libmachine: (ha-597780)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/ha-597780.rawdisk'/>
	I0814 16:25:17.136461   31878 main.go:141] libmachine: (ha-597780)       <target dev='hda' bus='virtio'/>
	I0814 16:25:17.136475   31878 main.go:141] libmachine: (ha-597780)     </disk>
	I0814 16:25:17.136487   31878 main.go:141] libmachine: (ha-597780)     <interface type='network'>
	I0814 16:25:17.136499   31878 main.go:141] libmachine: (ha-597780)       <source network='mk-ha-597780'/>
	I0814 16:25:17.136514   31878 main.go:141] libmachine: (ha-597780)       <model type='virtio'/>
	I0814 16:25:17.136524   31878 main.go:141] libmachine: (ha-597780)     </interface>
	I0814 16:25:17.136532   31878 main.go:141] libmachine: (ha-597780)     <interface type='network'>
	I0814 16:25:17.136546   31878 main.go:141] libmachine: (ha-597780)       <source network='default'/>
	I0814 16:25:17.136558   31878 main.go:141] libmachine: (ha-597780)       <model type='virtio'/>
	I0814 16:25:17.136567   31878 main.go:141] libmachine: (ha-597780)     </interface>
	I0814 16:25:17.136578   31878 main.go:141] libmachine: (ha-597780)     <serial type='pty'>
	I0814 16:25:17.136589   31878 main.go:141] libmachine: (ha-597780)       <target port='0'/>
	I0814 16:25:17.136606   31878 main.go:141] libmachine: (ha-597780)     </serial>
	I0814 16:25:17.136621   31878 main.go:141] libmachine: (ha-597780)     <console type='pty'>
	I0814 16:25:17.136638   31878 main.go:141] libmachine: (ha-597780)       <target type='serial' port='0'/>
	I0814 16:25:17.136657   31878 main.go:141] libmachine: (ha-597780)     </console>
	I0814 16:25:17.136668   31878 main.go:141] libmachine: (ha-597780)     <rng model='virtio'>
	I0814 16:25:17.136679   31878 main.go:141] libmachine: (ha-597780)       <backend model='random'>/dev/random</backend>
	I0814 16:25:17.136689   31878 main.go:141] libmachine: (ha-597780)     </rng>
	I0814 16:25:17.136698   31878 main.go:141] libmachine: (ha-597780)     
	I0814 16:25:17.136734   31878 main.go:141] libmachine: (ha-597780)     
	I0814 16:25:17.136752   31878 main.go:141] libmachine: (ha-597780)   </devices>
	I0814 16:25:17.136815   31878 main.go:141] libmachine: (ha-597780) </domain>
	I0814 16:25:17.136844   31878 main.go:141] libmachine: (ha-597780) 
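With the domain XML assembled, the kvm2 driver defines and boots the machine through the libvirt API. Outside of minikube, the equivalent steps can be reproduced with the virsh CLI; the sketch below simply shells out to virsh with placeholder file paths and is only an approximation of what the driver does programmatically.

// Illustrative sketch, not minikube code: defining and starting the network
// and domain from XML files with the virsh CLI, roughly what the kvm2 driver
// does via libvirt after printing the XML above. The /tmp paths are placeholders.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("virsh %v: %v\n%s", args, err, out)
	}
}

func main() {
	run("net-define", "/tmp/mk-ha-597780.xml") // network XML shown earlier in the log
	run("net-start", "mk-ha-597780")
	run("define", "/tmp/ha-597780.xml") // domain XML printed above
	run("start", "ha-597780")
}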
	I0814 16:25:17.140743   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:f8:cc:9d in network default
	I0814 16:25:17.141203   31878 main.go:141] libmachine: (ha-597780) Ensuring networks are active...
	I0814 16:25:17.141220   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:17.141830   31878 main.go:141] libmachine: (ha-597780) Ensuring network default is active
	I0814 16:25:17.142106   31878 main.go:141] libmachine: (ha-597780) Ensuring network mk-ha-597780 is active
	I0814 16:25:17.142507   31878 main.go:141] libmachine: (ha-597780) Getting domain xml...
	I0814 16:25:17.143143   31878 main.go:141] libmachine: (ha-597780) Creating domain...
	I0814 16:25:18.312528   31878 main.go:141] libmachine: (ha-597780) Waiting to get IP...
	I0814 16:25:18.313190   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:18.313568   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:18.313613   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:18.313562   31901 retry.go:31] will retry after 254.454148ms: waiting for machine to come up
	I0814 16:25:18.570182   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:18.570714   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:18.570755   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:18.570681   31901 retry.go:31] will retry after 324.643085ms: waiting for machine to come up
	I0814 16:25:18.897083   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:18.897461   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:18.897486   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:18.897420   31901 retry.go:31] will retry after 300.449231ms: waiting for machine to come up
	I0814 16:25:19.199898   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:19.200358   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:19.200384   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:19.200310   31901 retry.go:31] will retry after 550.899386ms: waiting for machine to come up
	I0814 16:25:19.752907   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:19.753360   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:19.753387   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:19.753308   31901 retry.go:31] will retry after 582.73846ms: waiting for machine to come up
	I0814 16:25:20.338033   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:20.338395   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:20.338423   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:20.338359   31901 retry.go:31] will retry after 661.209453ms: waiting for machine to come up
	I0814 16:25:21.000973   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:21.001278   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:21.001354   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:21.001255   31901 retry.go:31] will retry after 1.081333112s: waiting for machine to come up
	I0814 16:25:22.084264   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:22.084621   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:22.084680   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:22.084600   31901 retry.go:31] will retry after 1.016377445s: waiting for machine to come up
	I0814 16:25:23.102804   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:23.103343   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:23.103394   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:23.103275   31901 retry.go:31] will retry after 1.402260728s: waiting for machine to come up
	I0814 16:25:24.507776   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:24.508213   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:24.508236   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:24.508172   31901 retry.go:31] will retry after 2.141132665s: waiting for machine to come up
	I0814 16:25:26.650375   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:26.650778   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:26.650805   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:26.650735   31901 retry.go:31] will retry after 2.200155129s: waiting for machine to come up
	I0814 16:25:28.854009   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:28.854327   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:28.854352   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:28.854291   31901 retry.go:31] will retry after 3.179850613s: waiting for machine to come up
	I0814 16:25:32.035100   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:32.035560   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find current IP address of domain ha-597780 in network mk-ha-597780
	I0814 16:25:32.035583   31878 main.go:141] libmachine: (ha-597780) DBG | I0814 16:25:32.035512   31901 retry.go:31] will retry after 4.298197863s: waiting for machine to come up
	I0814 16:25:36.338930   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:36.339412   31878 main.go:141] libmachine: (ha-597780) Found IP for machine: 192.168.39.4
	I0814 16:25:36.339429   31878 main.go:141] libmachine: (ha-597780) Reserving static IP address...
	I0814 16:25:36.339441   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has current primary IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:36.339906   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find host DHCP lease matching {name: "ha-597780", mac: "52:54:00:d7:0e:d3", ip: "192.168.39.4"} in network mk-ha-597780
	I0814 16:25:36.412805   31878 main.go:141] libmachine: (ha-597780) DBG | Getting to WaitForSSH function...
	I0814 16:25:36.412831   31878 main.go:141] libmachine: (ha-597780) Reserved static IP address: 192.168.39.4
	I0814 16:25:36.412854   31878 main.go:141] libmachine: (ha-597780) Waiting for SSH to be available...
	I0814 16:25:36.415141   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:36.415495   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780
	I0814 16:25:36.415536   31878 main.go:141] libmachine: (ha-597780) DBG | unable to find defined IP address of network mk-ha-597780 interface with MAC address 52:54:00:d7:0e:d3
	I0814 16:25:36.415684   31878 main.go:141] libmachine: (ha-597780) DBG | Using SSH client type: external
	I0814 16:25:36.415703   31878 main.go:141] libmachine: (ha-597780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa (-rw-------)
	I0814 16:25:36.415739   31878 main.go:141] libmachine: (ha-597780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:25:36.415757   31878 main.go:141] libmachine: (ha-597780) DBG | About to run SSH command:
	I0814 16:25:36.415770   31878 main.go:141] libmachine: (ha-597780) DBG | exit 0
	I0814 16:25:36.419416   31878 main.go:141] libmachine: (ha-597780) DBG | SSH cmd err, output: exit status 255: 
	I0814 16:25:36.419439   31878 main.go:141] libmachine: (ha-597780) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0814 16:25:36.419448   31878 main.go:141] libmachine: (ha-597780) DBG | command : exit 0
	I0814 16:25:36.419460   31878 main.go:141] libmachine: (ha-597780) DBG | err     : exit status 255
	I0814 16:25:36.419473   31878 main.go:141] libmachine: (ha-597780) DBG | output  : 
	I0814 16:25:39.421510   31878 main.go:141] libmachine: (ha-597780) DBG | Getting to WaitForSSH function...
	I0814 16:25:39.424078   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.424451   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.424521   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.424593   31878 main.go:141] libmachine: (ha-597780) DBG | Using SSH client type: external
	I0814 16:25:39.424644   31878 main.go:141] libmachine: (ha-597780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa (-rw-------)
	I0814 16:25:39.424673   31878 main.go:141] libmachine: (ha-597780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:25:39.424689   31878 main.go:141] libmachine: (ha-597780) DBG | About to run SSH command:
	I0814 16:25:39.424703   31878 main.go:141] libmachine: (ha-597780) DBG | exit 0
	I0814 16:25:39.547152   31878 main.go:141] libmachine: (ha-597780) DBG | SSH cmd err, output: <nil>: 
	I0814 16:25:39.547435   31878 main.go:141] libmachine: (ha-597780) KVM machine creation complete!
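The "will retry after ..." lines above come from a polling loop that waits for the new VM to obtain a DHCP lease and answer on SSH, backing off between attempts. A minimal sketch of that wait/retry pattern follows, with made-up timings and a stand-in check function rather than minikube's actual retry helper.

// Illustrative sketch of the wait/retry pattern seen in the log's
// "will retry after ..." lines while the VM acquires a DHCP lease and
// starts sshd. waitFor and its timings are examples, not minikube's retry.go.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() with a jittered, growing delay until it succeeds
// or the deadline passes.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machine to come up")
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	start := time.Now()
	_ = waitFor(func() error {
		if time.Since(start) > 2*time.Second { // stand-in for "domain has an IP"
			return nil
		}
		return errors.New("no DHCP lease yet")
	}, 30*time.Second)
}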
	I0814 16:25:39.547760   31878 main.go:141] libmachine: (ha-597780) Calling .GetConfigRaw
	I0814 16:25:39.548271   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:39.548518   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:39.548681   31878 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 16:25:39.548694   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:25:39.550030   31878 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 16:25:39.550053   31878 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 16:25:39.550061   31878 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 16:25:39.550068   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.552399   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.552722   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.552745   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.552887   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:39.553063   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.553209   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.553355   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:39.553488   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:39.553719   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:39.553731   31878 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 16:25:39.650454   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:25:39.650478   31878 main.go:141] libmachine: Detecting the provisioner...
	I0814 16:25:39.650488   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.653338   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.653756   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.653785   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.653914   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:39.654119   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.654246   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.654367   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:39.654518   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:39.654731   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:39.654754   31878 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 16:25:39.751867   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 16:25:39.751933   31878 main.go:141] libmachine: found compatible host: buildroot
	I0814 16:25:39.751942   31878 main.go:141] libmachine: Provisioning with buildroot...
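The provisioner detection above amounts to reading /etc/os-release on the guest and matching the Buildroot ID. A minimal, self-contained Go sketch of that check (not minikube's actual detector; the sample input is copied from the log output above):

// detect_provisioner.go - illustrative only; parses os-release text and looks for ID=buildroot.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func detectProvisioner(osRelease string) string {
	vals := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			vals[k] = strings.Trim(v, `"`)
		}
	}
	if vals["ID"] == "buildroot" {
		return "buildroot"
	}
	return "unknown"
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println(detectProvisioner(out)) // buildroot
}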
	I0814 16:25:39.751949   31878 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:25:39.752189   31878 buildroot.go:166] provisioning hostname "ha-597780"
	I0814 16:25:39.752214   31878 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:25:39.752398   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.754819   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.755136   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.755162   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.755272   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:39.755528   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.755776   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.755908   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:39.756047   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:39.756223   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:39.756237   31878 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-597780 && echo "ha-597780" | sudo tee /etc/hostname
	I0814 16:25:39.868750   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780
	
	I0814 16:25:39.868781   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.871293   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.871681   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.871707   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.871899   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:39.872112   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.872295   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:39.872448   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:39.872690   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:39.872938   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:39.872959   31878 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-597780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-597780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-597780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:25:39.980882   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:25:39.980908   31878 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:25:39.980935   31878 buildroot.go:174] setting up certificates
	I0814 16:25:39.980951   31878 provision.go:84] configureAuth start
	I0814 16:25:39.980962   31878 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:25:39.981243   31878 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:25:39.983763   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.984094   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.984115   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.984260   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:39.986386   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.986692   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:39.986723   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:39.986857   31878 provision.go:143] copyHostCerts
	I0814 16:25:39.986891   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:25:39.986925   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 16:25:39.986938   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:25:39.987025   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:25:39.987135   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:25:39.987160   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 16:25:39.987169   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:25:39.987209   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:25:39.987284   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:25:39.987337   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 16:25:39.987348   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:25:39.987385   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:25:39.987460   31878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.ha-597780 san=[127.0.0.1 192.168.39.4 ha-597780 localhost minikube]
	I0814 16:25:40.130425   31878 provision.go:177] copyRemoteCerts
	I0814 16:25:40.130484   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:25:40.130507   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.133344   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.133638   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.133661   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.133827   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.134056   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.134235   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.134395   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:25:40.217025   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 16:25:40.217092   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:25:40.239452   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 16:25:40.239515   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0814 16:25:40.260864   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 16:25:40.260926   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 16:25:40.282296   31878 provision.go:87] duration metric: took 301.331388ms to configureAuth
	I0814 16:25:40.282331   31878 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:25:40.282512   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:25:40.282579   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.285182   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.285501   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.285528   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.285735   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.285955   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.286114   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.286213   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.286372   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:40.286536   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:40.286552   31878 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:25:40.532377   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:25:40.532414   31878 main.go:141] libmachine: Checking connection to Docker...
	I0814 16:25:40.532424   31878 main.go:141] libmachine: (ha-597780) Calling .GetURL
	I0814 16:25:40.533632   31878 main.go:141] libmachine: (ha-597780) DBG | Using libvirt version 6000000
	I0814 16:25:40.535761   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.536096   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.536125   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.536281   31878 main.go:141] libmachine: Docker is up and running!
	I0814 16:25:40.536294   31878 main.go:141] libmachine: Reticulating splines...
	I0814 16:25:40.536309   31878 client.go:171] duration metric: took 23.899827196s to LocalClient.Create
	I0814 16:25:40.536332   31878 start.go:167] duration metric: took 23.899896998s to libmachine.API.Create "ha-597780"
	I0814 16:25:40.536354   31878 start.go:293] postStartSetup for "ha-597780" (driver="kvm2")
	I0814 16:25:40.536366   31878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:25:40.536381   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.536616   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:25:40.536645   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.538490   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.538846   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.538882   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.539016   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.539227   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.539456   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.539620   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:25:40.617331   31878 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:25:40.621102   31878 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:25:40.621123   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:25:40.621189   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:25:40.621277   31878 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 16:25:40.621288   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 16:25:40.621420   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 16:25:40.630159   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:25:40.652129   31878 start.go:296] duration metric: took 115.760269ms for postStartSetup
	I0814 16:25:40.652188   31878 main.go:141] libmachine: (ha-597780) Calling .GetConfigRaw
	I0814 16:25:40.652822   31878 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:25:40.655420   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.655762   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.655789   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.656099   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:25:40.656317   31878 start.go:128] duration metric: took 24.037606425s to createHost
	I0814 16:25:40.656344   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.658540   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.658909   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.658936   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.659025   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.659204   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.659367   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.659508   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.659707   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:25:40.659861   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:25:40.659872   31878 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:25:40.755816   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723652740.735372497
	
	I0814 16:25:40.755837   31878 fix.go:216] guest clock: 1723652740.735372497
	I0814 16:25:40.755846   31878 fix.go:229] Guest: 2024-08-14 16:25:40.735372497 +0000 UTC Remote: 2024-08-14 16:25:40.656331655 +0000 UTC m=+24.138615915 (delta=79.040842ms)
	I0814 16:25:40.755868   31878 fix.go:200] guest clock delta is within tolerance: 79.040842ms
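The clock-skew check above parses the guest's `date +%s.%N` output and compares it against the host clock. A small Go sketch of the same idea; the 2-second tolerance is an assumption for illustration, since the actual threshold is not printed in this log:

// clock_skew.go - sketch of the guest-vs-host clock comparison shown above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestClockWithinTolerance parses `date +%s.%N` output and reports the delta
// against the supplied host time, plus whether it falls inside the tolerance.
func guestClockWithinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	// Values taken from the log lines above.
	delta, ok := guestClockWithinTolerance("1723652740.735372497", time.Unix(1723652740, 656331655), 2*time.Second)
	fmt.Println(delta, ok) // ~ -79ms true
}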
	I0814 16:25:40.755875   31878 start.go:83] releasing machines lock for "ha-597780", held for 24.137268103s
	I0814 16:25:40.755897   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.756161   31878 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:25:40.758861   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.759155   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.759181   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.759371   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.759800   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.759973   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:25:40.760051   31878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:25:40.760097   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.760184   31878 ssh_runner.go:195] Run: cat /version.json
	I0814 16:25:40.760208   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:25:40.762543   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.762917   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.763034   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.763061   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.763196   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.763283   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:40.763308   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:40.763387   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.763486   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:25:40.763566   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.763627   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:25:40.763720   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:25:40.763762   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:25:40.763879   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:25:40.836219   31878 ssh_runner.go:195] Run: systemctl --version
	I0814 16:25:40.872803   31878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:25:41.034508   31878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:25:41.040114   31878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:25:41.040166   31878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:25:41.056211   31878 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 16:25:41.056233   31878 start.go:495] detecting cgroup driver to use...
	I0814 16:25:41.056295   31878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:25:41.073872   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:25:41.087841   31878 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:25:41.087889   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:25:41.101436   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:25:41.114647   31878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:25:41.242293   31878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:25:41.392867   31878 docker.go:233] disabling docker service ...
	I0814 16:25:41.392925   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:25:41.406539   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:25:41.418791   31878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:25:41.562392   31878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:25:41.670141   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:25:41.682918   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:25:41.699581   31878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:25:41.699640   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.708701   31878 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:25:41.708751   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.717814   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.726667   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.735787   31878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:25:41.744771   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.753853   31878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.768967   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:25:41.778036   31878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:25:41.786623   31878 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 16:25:41.786690   31878 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 16:25:41.798228   31878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:25:41.807129   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:25:41.915590   31878 ssh_runner.go:195] Run: sudo systemctl restart crio
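The block above rewrites the CRI-O drop-in with a handful of sed expressions (pause image, cgroupfs cgroup manager, conmon cgroup) and then restarts the service. A rough sketch of those steps, assuming a local root shell via sudo rather than minikube's ssh_runner; the expressions and paths are taken from the log:

// crio_config.go - illustrative sketch, not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"sed", "-i", `/conmon_cgroup = .*/d`, conf},
		{"sed", "-i", `/cgroup_manager = .*/a conmon_cgroup = "pod"`, conf},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if err := run("sudo", s...); err != nil {
			fmt.Println(err)
			return
		}
	}
}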
	I0814 16:25:42.044261   31878 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:25:42.044324   31878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:25:42.048705   31878 start.go:563] Will wait 60s for crictl version
	I0814 16:25:42.048756   31878 ssh_runner.go:195] Run: which crictl
	I0814 16:25:42.052119   31878 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:25:42.088329   31878 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:25:42.088395   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:25:42.115989   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:25:42.145294   31878 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:25:42.146545   31878 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:25:42.149223   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:42.149538   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:25:42.149569   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:25:42.149779   31878 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:25:42.153620   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:25:42.165730   31878 kubeadm.go:883] updating cluster {Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 16:25:42.165842   31878 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:25:42.165885   31878 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:25:42.200604   31878 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 16:25:42.200693   31878 ssh_runner.go:195] Run: which lz4
	I0814 16:25:42.204297   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0814 16:25:42.204391   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 16:25:42.207994   31878 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 16:25:42.208028   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 16:25:43.394127   31878 crio.go:462] duration metric: took 1.189761448s to copy over tarball
	I0814 16:25:43.394188   31878 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 16:25:45.390027   31878 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.995810476s)
	I0814 16:25:45.390064   31878 crio.go:469] duration metric: took 1.995914579s to extract the tarball
	I0814 16:25:45.390071   31878 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 16:25:45.427467   31878 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:25:45.470088   31878 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:25:45.470110   31878 cache_images.go:84] Images are preloaded, skipping loading
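The preload step above first asks crictl whether the images are already present; if not, it copies the lz4 tarball to /preloaded.tar.lz4 and unpacks it into /var. A hedged Go sketch of the check-and-extract half, assuming the tarball is already on the target machine:

// preload_extract.go - sketch of the extraction shown in the log, not minikube's code.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing, would scp it over first:", err)
		return
	}
	// Same extraction command the log runs over SSH.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	_ = os.Remove(tarball) // mirrors the rm of /preloaded.tar.lz4 above (may need sudo)
}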
	I0814 16:25:45.470118   31878 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.0 crio true true} ...
	I0814 16:25:45.470219   31878 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-597780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:25:45.470278   31878 ssh_runner.go:195] Run: crio config
	I0814 16:25:45.515075   31878 cni.go:84] Creating CNI manager for ""
	I0814 16:25:45.515094   31878 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0814 16:25:45.515102   31878 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 16:25:45.515144   31878 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-597780 NodeName:ha-597780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 16:25:45.515274   31878 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-597780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 16:25:45.515297   31878 kube-vip.go:115] generating kube-vip config ...
	I0814 16:25:45.515353   31878 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 16:25:45.530503   31878 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 16:25:45.530621   31878 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0814 16:25:45.530694   31878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:25:45.539737   31878 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 16:25:45.539806   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0814 16:25:45.548183   31878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0814 16:25:45.563371   31878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:25:45.578355   31878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0814 16:25:45.593987   31878 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0814 16:25:45.609843   31878 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 16:25:45.613628   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:25:45.624434   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:25:45.750267   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:25:45.765376   31878 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780 for IP: 192.168.39.4
	I0814 16:25:45.765401   31878 certs.go:194] generating shared ca certs ...
	I0814 16:25:45.765423   31878 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:45.765631   31878 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:25:45.765685   31878 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:25:45.765699   31878 certs.go:256] generating profile certs ...
	I0814 16:25:45.765763   31878 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key
	I0814 16:25:45.765789   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt with IP's: []
	I0814 16:25:45.882404   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt ...
	I0814 16:25:45.882431   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt: {Name:mk5c5a98085888ca6febc66415d437d0012bb40a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:45.882602   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key ...
	I0814 16:25:45.882614   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key: {Name:mk7da86224abddf18d89cfe84fa53bc6be9a481f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:45.882687   31878 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.532024e0
	I0814 16:25:45.882707   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.532024e0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.254]
	I0814 16:25:46.097370   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.532024e0 ...
	I0814 16:25:46.097399   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.532024e0: {Name:mk68b70a36dbd806aacd25471a1104371a586b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:46.097552   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.532024e0 ...
	I0814 16:25:46.097565   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.532024e0: {Name:mk0d223519a26ba2f37b494273f30644ffa08449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:46.097632   31878 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.532024e0 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt
	I0814 16:25:46.097718   31878 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.532024e0 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key
	I0814 16:25:46.097771   31878 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key
	I0814 16:25:46.097786   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt with IP's: []
	I0814 16:25:46.205695   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt ...
	I0814 16:25:46.205725   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt: {Name:mkdf9a77f4c8f8d2c0e1538b16a9760abb4ed441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:25:46.205897   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key ...
	I0814 16:25:46.205910   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key: {Name:mk17bddeba50b7cc1228cf21c55462eb62fa48ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
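The profile certs generated above are ordinary CA-signed x509 server certificates whose IP SANs cover the in-cluster service IPs, localhost, the node IP and the HA VIP listed in the log. A self-contained Go sketch of minting such a certificate; the throwaway in-memory CA is an assumption so the example runs standalone, whereas minikube signs with its persisted minikubeCA key:

// profile_cert.go - illustrative x509 generation with the SAN list from the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA (assumption for self-containment).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().AddDate(3, 0, 0),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert with the IP SANs shown in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2), Subject: pkix.Name{CommonName: "minikube"},
		NotBefore: time.Now(), NotAfter: time.Now().AddDate(3, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.4"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}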
	I0814 16:25:46.205997   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 16:25:46.206016   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 16:25:46.206032   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 16:25:46.206057   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 16:25:46.206079   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 16:25:46.206096   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 16:25:46.206111   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 16:25:46.206125   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 16:25:46.206176   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 16:25:46.206220   31878 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 16:25:46.206228   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:25:46.206254   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:25:46.206279   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:25:46.206310   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:25:46.206351   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:25:46.206382   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 16:25:46.206398   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 16:25:46.206413   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:25:46.206977   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:25:46.230888   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:25:46.252235   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:25:46.273903   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:25:46.294762   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 16:25:46.316062   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:25:46.337249   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:25:46.358453   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:25:46.380800   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 16:25:46.403386   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 16:25:46.431042   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:25:46.458718   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 16:25:46.475992   31878 ssh_runner.go:195] Run: openssl version
	I0814 16:25:46.481464   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 16:25:46.491579   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 16:25:46.495749   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 16:25:46.495791   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 16:25:46.501365   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 16:25:46.511290   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 16:25:46.526773   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 16:25:46.532476   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 16:25:46.532540   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 16:25:46.540197   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 16:25:46.552276   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:25:46.566795   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:25:46.571978   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:25:46.572021   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:25:46.577549   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
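
	[editor's note] The block above is the standard CA trust-link layout: each PEM under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink (51391683.0, 3ec20f2e.0, b5213941.0 here), where the hash is what `openssl x509 -hash -noout` prints. A minimal Go sketch of that pattern, only for illustration; the paths and the reliance on an openssl binary are assumptions, not minikube's own code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink that the log's `ln -fs` commands produce.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
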
	I0814 16:25:46.592133   31878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:25:46.596232   31878 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:25:46.596288   31878 kubeadm.go:392] StartCluster: {Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:25:46.596374   31878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 16:25:46.596429   31878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 16:25:46.630026   31878 cri.go:89] found id: ""
	I0814 16:25:46.630109   31878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 16:25:46.639521   31878 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 16:25:46.648484   31878 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 16:25:46.656920   31878 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 16:25:46.656939   31878 kubeadm.go:157] found existing configuration files:
	
	I0814 16:25:46.656989   31878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 16:25:46.665224   31878 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 16:25:46.665273   31878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 16:25:46.673373   31878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 16:25:46.681173   31878 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 16:25:46.681232   31878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 16:25:46.689378   31878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 16:25:46.697165   31878 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 16:25:46.697211   31878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 16:25:46.705214   31878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 16:25:46.712909   31878 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 16:25:46.712967   31878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 16:25:46.721049   31878 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 16:25:46.803347   31878 kubeadm.go:310] W0814 16:25:46.788453     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:25:46.804143   31878 kubeadm.go:310] W0814 16:25:46.789487     846 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 16:25:46.906447   31878 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 16:26:00.479126   31878 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 16:26:00.479193   31878 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 16:26:00.479275   31878 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 16:26:00.479406   31878 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 16:26:00.479551   31878 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 16:26:00.479650   31878 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 16:26:00.481173   31878 out.go:204]   - Generating certificates and keys ...
	I0814 16:26:00.481261   31878 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 16:26:00.481333   31878 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 16:26:00.481418   31878 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 16:26:00.481493   31878 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 16:26:00.481581   31878 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 16:26:00.481651   31878 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 16:26:00.481714   31878 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 16:26:00.481865   31878 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-597780 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0814 16:26:00.481949   31878 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 16:26:00.482057   31878 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-597780 localhost] and IPs [192.168.39.4 127.0.0.1 ::1]
	I0814 16:26:00.482134   31878 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 16:26:00.482227   31878 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 16:26:00.482323   31878 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 16:26:00.482408   31878 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 16:26:00.482488   31878 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 16:26:00.482578   31878 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 16:26:00.482623   31878 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 16:26:00.482707   31878 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 16:26:00.482791   31878 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 16:26:00.482872   31878 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 16:26:00.482952   31878 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 16:26:00.484291   31878 out.go:204]   - Booting up control plane ...
	I0814 16:26:00.484406   31878 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 16:26:00.484484   31878 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 16:26:00.484558   31878 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 16:26:00.484671   31878 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 16:26:00.484795   31878 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 16:26:00.484872   31878 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 16:26:00.484988   31878 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 16:26:00.485106   31878 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 16:26:00.485161   31878 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.682567ms
	I0814 16:26:00.485232   31878 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 16:26:00.485327   31878 kubeadm.go:310] [api-check] The API server is healthy after 9.053757546s
	I0814 16:26:00.485475   31878 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 16:26:00.485659   31878 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 16:26:00.485712   31878 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 16:26:00.485922   31878 kubeadm.go:310] [mark-control-plane] Marking the node ha-597780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 16:26:00.486002   31878 kubeadm.go:310] [bootstrap-token] Using token: 3teiyp.0zkkksy6kl58w9xk
	I0814 16:26:00.487553   31878 out.go:204]   - Configuring RBAC rules ...
	I0814 16:26:00.487681   31878 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 16:26:00.487759   31878 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 16:26:00.487925   31878 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 16:26:00.488072   31878 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 16:26:00.488226   31878 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 16:26:00.488363   31878 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 16:26:00.488496   31878 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 16:26:00.488601   31878 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 16:26:00.488665   31878 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 16:26:00.488675   31878 kubeadm.go:310] 
	I0814 16:26:00.488748   31878 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 16:26:00.488758   31878 kubeadm.go:310] 
	I0814 16:26:00.488854   31878 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 16:26:00.488863   31878 kubeadm.go:310] 
	I0814 16:26:00.488899   31878 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 16:26:00.488974   31878 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 16:26:00.489049   31878 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 16:26:00.489056   31878 kubeadm.go:310] 
	I0814 16:26:00.489134   31878 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 16:26:00.489143   31878 kubeadm.go:310] 
	I0814 16:26:00.489206   31878 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 16:26:00.489226   31878 kubeadm.go:310] 
	I0814 16:26:00.489287   31878 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 16:26:00.489438   31878 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 16:26:00.489542   31878 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 16:26:00.489551   31878 kubeadm.go:310] 
	I0814 16:26:00.489652   31878 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 16:26:00.489718   31878 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 16:26:00.489725   31878 kubeadm.go:310] 
	I0814 16:26:00.489843   31878 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3teiyp.0zkkksy6kl58w9xk \
	I0814 16:26:00.489993   31878 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 16:26:00.490025   31878 kubeadm.go:310] 	--control-plane 
	I0814 16:26:00.490030   31878 kubeadm.go:310] 
	I0814 16:26:00.490144   31878 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 16:26:00.490156   31878 kubeadm.go:310] 
	I0814 16:26:00.490265   31878 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3teiyp.0zkkksy6kl58w9xk \
	I0814 16:26:00.490451   31878 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 16:26:00.490469   31878 cni.go:84] Creating CNI manager for ""
	I0814 16:26:00.490474   31878 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0814 16:26:00.492190   31878 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0814 16:26:00.493484   31878 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0814 16:26:00.498701   31878 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0814 16:26:00.498717   31878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0814 16:26:00.514828   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0814 16:26:00.910545   31878 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 16:26:00.910633   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:26:00.910644   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-597780 minikube.k8s.io/updated_at=2024_08_14T16_26_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=ha-597780 minikube.k8s.io/primary=true
	I0814 16:26:01.073531   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:26:01.117729   31878 ops.go:34] apiserver oom_adj: -16
	I0814 16:26:01.574464   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 16:26:01.671358   31878 kubeadm.go:1113] duration metric: took 760.793717ms to wait for elevateKubeSystemPrivileges
	I0814 16:26:01.671403   31878 kubeadm.go:394] duration metric: took 15.075119104s to StartCluster
	I0814 16:26:01.671425   31878 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:01.671514   31878 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:26:01.672172   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:01.672419   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 16:26:01.672425   31878 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:26:01.672451   31878 start.go:241] waiting for startup goroutines ...
	I0814 16:26:01.672471   31878 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 16:26:01.672540   31878 addons.go:69] Setting storage-provisioner=true in profile "ha-597780"
	I0814 16:26:01.672549   31878 addons.go:69] Setting default-storageclass=true in profile "ha-597780"
	I0814 16:26:01.672570   31878 addons.go:234] Setting addon storage-provisioner=true in "ha-597780"
	I0814 16:26:01.672575   31878 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-597780"
	I0814 16:26:01.672600   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:26:01.672631   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:26:01.673005   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.673021   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.673042   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.673052   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.687831   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0814 16:26:01.688157   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0814 16:26:01.688289   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.688603   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.688835   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.688858   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.689098   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.689126   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.689251   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.689441   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.689622   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:26:01.689886   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.689922   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.691907   31878 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:26:01.692231   31878 kapi.go:59] client config for ha-597780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key", CAFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f170c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0814 16:26:01.692696   31878 cert_rotation.go:140] Starting client certificate rotation controller
	I0814 16:26:01.692964   31878 addons.go:234] Setting addon default-storageclass=true in "ha-597780"
	I0814 16:26:01.693003   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:26:01.693357   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.693387   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.705150   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0814 16:26:01.705670   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.706203   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.706230   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.706536   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.706754   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:26:01.708441   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:26:01.708491   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33399
	I0814 16:26:01.708865   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.709542   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.709557   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.709849   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.710283   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:01.710296   31878 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 16:26:01.710323   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:01.711487   31878 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:26:01.711506   31878 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 16:26:01.711524   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:26:01.714640   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:01.715112   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:26:01.715133   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:01.715305   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:26:01.715482   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:26:01.715667   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:26:01.715841   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:26:01.725065   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45303
	I0814 16:26:01.725432   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:01.725858   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:01.725877   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:01.726170   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:01.726366   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:26:01.727693   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:26:01.727908   31878 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 16:26:01.727922   31878 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 16:26:01.727934   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:26:01.730593   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:01.730926   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:26:01.730953   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:01.731078   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:26:01.731218   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:26:01.731391   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:26:01.731504   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:26:01.798415   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 16:26:01.816423   31878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 16:26:01.847088   31878 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 16:26:02.254415   31878 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
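
	[editor's note] The host record above is injected by rewriting the coredns ConfigMap's Corefile (the sed + `kubectl replace` pipeline logged a few lines earlier). For reference, the same ConfigMap can be read with client-go; this is only an illustrative sketch assuming a local kubeconfig, not the kubectl-over-SSH path minikube actually uses here:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Fetch the coredns ConfigMap and print the Corefile that carries the hosts block.
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(cm.Data["Corefile"])
}
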
	I0814 16:26:02.499957   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.499980   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.500014   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.500073   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.500277   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.500335   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.500359   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.500385   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.500389   31878 main.go:141] libmachine: (ha-597780) DBG | Closing plugin on server side
	I0814 16:26:02.500384   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.500418   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.500435   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.500447   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.500616   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.500635   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.500684   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.500699   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.500706   31878 main.go:141] libmachine: (ha-597780) DBG | Closing plugin on server side
	I0814 16:26:02.500750   31878 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0814 16:26:02.500770   31878 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0814 16:26:02.500855   31878 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0814 16:26:02.500866   31878 round_trippers.go:469] Request Headers:
	I0814 16:26:02.500877   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:26:02.500887   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:26:02.511730   31878 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0814 16:26:02.512383   31878 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0814 16:26:02.512400   31878 round_trippers.go:469] Request Headers:
	I0814 16:26:02.512408   31878 round_trippers.go:473]     Content-Type: application/json
	I0814 16:26:02.512415   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:26:02.512422   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:26:02.517415   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:26:02.517567   31878 main.go:141] libmachine: Making call to close driver server
	I0814 16:26:02.517578   31878 main.go:141] libmachine: (ha-597780) Calling .Close
	I0814 16:26:02.517897   31878 main.go:141] libmachine: Successfully made call to close driver server
	I0814 16:26:02.517923   31878 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 16:26:02.517934   31878 main.go:141] libmachine: (ha-597780) DBG | Closing plugin on server side
	I0814 16:26:02.519809   31878 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0814 16:26:02.521080   31878 addons.go:510] duration metric: took 848.618532ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0814 16:26:02.521109   31878 start.go:246] waiting for cluster config update ...
	I0814 16:26:02.521119   31878 start.go:255] writing updated cluster config ...
	I0814 16:26:02.522687   31878 out.go:177] 
	I0814 16:26:02.524065   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:26:02.524136   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:26:02.525889   31878 out.go:177] * Starting "ha-597780-m02" control-plane node in "ha-597780" cluster
	I0814 16:26:02.527045   31878 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:26:02.527072   31878 cache.go:56] Caching tarball of preloaded images
	I0814 16:26:02.527169   31878 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:26:02.527182   31878 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:26:02.527277   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:26:02.527545   31878 start.go:360] acquireMachinesLock for ha-597780-m02: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:26:02.527617   31878 start.go:364] duration metric: took 50.662µs to acquireMachinesLock for "ha-597780-m02"
	I0814 16:26:02.527642   31878 start.go:93] Provisioning new machine with config: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:26:02.527782   31878 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0814 16:26:02.530447   31878 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 16:26:02.530557   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:02.530587   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:02.545661   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44331
	I0814 16:26:02.546072   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:02.546637   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:02.546664   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:02.547063   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:02.547287   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetMachineName
	I0814 16:26:02.547456   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:02.547608   31878 start.go:159] libmachine.API.Create for "ha-597780" (driver="kvm2")
	I0814 16:26:02.547630   31878 client.go:168] LocalClient.Create starting
	I0814 16:26:02.547671   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 16:26:02.547710   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:26:02.547726   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:26:02.547776   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 16:26:02.547794   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:26:02.547806   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:26:02.547822   31878 main.go:141] libmachine: Running pre-create checks...
	I0814 16:26:02.547830   31878 main.go:141] libmachine: (ha-597780-m02) Calling .PreCreateCheck
	I0814 16:26:02.547987   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetConfigRaw
	I0814 16:26:02.548414   31878 main.go:141] libmachine: Creating machine...
	I0814 16:26:02.548428   31878 main.go:141] libmachine: (ha-597780-m02) Calling .Create
	I0814 16:26:02.548567   31878 main.go:141] libmachine: (ha-597780-m02) Creating KVM machine...
	I0814 16:26:02.549806   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found existing default KVM network
	I0814 16:26:02.549997   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found existing private KVM network mk-ha-597780
	I0814 16:26:02.550165   31878 main.go:141] libmachine: (ha-597780-m02) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02 ...
	I0814 16:26:02.550189   31878 main.go:141] libmachine: (ha-597780-m02) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:26:02.550255   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:02.550151   32270 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:26:02.550367   31878 main.go:141] libmachine: (ha-597780-m02) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 16:26:02.783188   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:02.783060   32270 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa...
	I0814 16:26:03.055543   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:03.055379   32270 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/ha-597780-m02.rawdisk...
	I0814 16:26:03.055587   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Writing magic tar header
	I0814 16:26:03.055602   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Writing SSH key tar header
	I0814 16:26:03.055611   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:03.055490   32270 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02 ...
	I0814 16:26:03.055621   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02
	I0814 16:26:03.055629   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 16:26:03.055641   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:26:03.055651   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 16:26:03.055662   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02 (perms=drwx------)
	I0814 16:26:03.055676   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 16:26:03.055691   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 16:26:03.055704   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 16:26:03.055711   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 16:26:03.055720   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 16:26:03.055727   31878 main.go:141] libmachine: (ha-597780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 16:26:03.055737   31878 main.go:141] libmachine: (ha-597780-m02) Creating domain...
	I0814 16:26:03.055755   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home/jenkins
	I0814 16:26:03.055769   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Checking permissions on dir: /home
	I0814 16:26:03.055780   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Skipping /home - not owner
	I0814 16:26:03.056673   31878 main.go:141] libmachine: (ha-597780-m02) define libvirt domain using xml: 
	I0814 16:26:03.056691   31878 main.go:141] libmachine: (ha-597780-m02) <domain type='kvm'>
	I0814 16:26:03.056699   31878 main.go:141] libmachine: (ha-597780-m02)   <name>ha-597780-m02</name>
	I0814 16:26:03.056704   31878 main.go:141] libmachine: (ha-597780-m02)   <memory unit='MiB'>2200</memory>
	I0814 16:26:03.056710   31878 main.go:141] libmachine: (ha-597780-m02)   <vcpu>2</vcpu>
	I0814 16:26:03.056715   31878 main.go:141] libmachine: (ha-597780-m02)   <features>
	I0814 16:26:03.056720   31878 main.go:141] libmachine: (ha-597780-m02)     <acpi/>
	I0814 16:26:03.056727   31878 main.go:141] libmachine: (ha-597780-m02)     <apic/>
	I0814 16:26:03.056732   31878 main.go:141] libmachine: (ha-597780-m02)     <pae/>
	I0814 16:26:03.056736   31878 main.go:141] libmachine: (ha-597780-m02)     
	I0814 16:26:03.056742   31878 main.go:141] libmachine: (ha-597780-m02)   </features>
	I0814 16:26:03.056750   31878 main.go:141] libmachine: (ha-597780-m02)   <cpu mode='host-passthrough'>
	I0814 16:26:03.056756   31878 main.go:141] libmachine: (ha-597780-m02)   
	I0814 16:26:03.056762   31878 main.go:141] libmachine: (ha-597780-m02)   </cpu>
	I0814 16:26:03.056785   31878 main.go:141] libmachine: (ha-597780-m02)   <os>
	I0814 16:26:03.056800   31878 main.go:141] libmachine: (ha-597780-m02)     <type>hvm</type>
	I0814 16:26:03.056809   31878 main.go:141] libmachine: (ha-597780-m02)     <boot dev='cdrom'/>
	I0814 16:26:03.056814   31878 main.go:141] libmachine: (ha-597780-m02)     <boot dev='hd'/>
	I0814 16:26:03.056823   31878 main.go:141] libmachine: (ha-597780-m02)     <bootmenu enable='no'/>
	I0814 16:26:03.056828   31878 main.go:141] libmachine: (ha-597780-m02)   </os>
	I0814 16:26:03.056839   31878 main.go:141] libmachine: (ha-597780-m02)   <devices>
	I0814 16:26:03.056845   31878 main.go:141] libmachine: (ha-597780-m02)     <disk type='file' device='cdrom'>
	I0814 16:26:03.056856   31878 main.go:141] libmachine: (ha-597780-m02)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/boot2docker.iso'/>
	I0814 16:26:03.056865   31878 main.go:141] libmachine: (ha-597780-m02)       <target dev='hdc' bus='scsi'/>
	I0814 16:26:03.056883   31878 main.go:141] libmachine: (ha-597780-m02)       <readonly/>
	I0814 16:26:03.056899   31878 main.go:141] libmachine: (ha-597780-m02)     </disk>
	I0814 16:26:03.056912   31878 main.go:141] libmachine: (ha-597780-m02)     <disk type='file' device='disk'>
	I0814 16:26:03.056919   31878 main.go:141] libmachine: (ha-597780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 16:26:03.056934   31878 main.go:141] libmachine: (ha-597780-m02)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/ha-597780-m02.rawdisk'/>
	I0814 16:26:03.056950   31878 main.go:141] libmachine: (ha-597780-m02)       <target dev='hda' bus='virtio'/>
	I0814 16:26:03.056963   31878 main.go:141] libmachine: (ha-597780-m02)     </disk>
	I0814 16:26:03.056976   31878 main.go:141] libmachine: (ha-597780-m02)     <interface type='network'>
	I0814 16:26:03.056989   31878 main.go:141] libmachine: (ha-597780-m02)       <source network='mk-ha-597780'/>
	I0814 16:26:03.057000   31878 main.go:141] libmachine: (ha-597780-m02)       <model type='virtio'/>
	I0814 16:26:03.057011   31878 main.go:141] libmachine: (ha-597780-m02)     </interface>
	I0814 16:26:03.057021   31878 main.go:141] libmachine: (ha-597780-m02)     <interface type='network'>
	I0814 16:26:03.057033   31878 main.go:141] libmachine: (ha-597780-m02)       <source network='default'/>
	I0814 16:26:03.057046   31878 main.go:141] libmachine: (ha-597780-m02)       <model type='virtio'/>
	I0814 16:26:03.057063   31878 main.go:141] libmachine: (ha-597780-m02)     </interface>
	I0814 16:26:03.057079   31878 main.go:141] libmachine: (ha-597780-m02)     <serial type='pty'>
	I0814 16:26:03.057090   31878 main.go:141] libmachine: (ha-597780-m02)       <target port='0'/>
	I0814 16:26:03.057098   31878 main.go:141] libmachine: (ha-597780-m02)     </serial>
	I0814 16:26:03.057108   31878 main.go:141] libmachine: (ha-597780-m02)     <console type='pty'>
	I0814 16:26:03.057119   31878 main.go:141] libmachine: (ha-597780-m02)       <target type='serial' port='0'/>
	I0814 16:26:03.057130   31878 main.go:141] libmachine: (ha-597780-m02)     </console>
	I0814 16:26:03.057140   31878 main.go:141] libmachine: (ha-597780-m02)     <rng model='virtio'>
	I0814 16:26:03.057153   31878 main.go:141] libmachine: (ha-597780-m02)       <backend model='random'>/dev/random</backend>
	I0814 16:26:03.057163   31878 main.go:141] libmachine: (ha-597780-m02)     </rng>
	I0814 16:26:03.057171   31878 main.go:141] libmachine: (ha-597780-m02)     
	I0814 16:26:03.057180   31878 main.go:141] libmachine: (ha-597780-m02)     
	I0814 16:26:03.057192   31878 main.go:141] libmachine: (ha-597780-m02)   </devices>
	I0814 16:26:03.057205   31878 main.go:141] libmachine: (ha-597780-m02) </domain>
	I0814 16:26:03.057219   31878 main.go:141] libmachine: (ha-597780-m02) 
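
	[editor's note] The domain XML above is then defined and started ("Creating domain..."). A rough sketch of the equivalent calls using the libvirt.org/go/libvirt bindings, assuming that module and a local qemu:///system socket; the kvm2 machine driver's actual code path may differ:

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical file holding the domain XML printed in the log above.
	xml, err := os.ReadFile("ha-597780-m02.xml")
	if err != nil {
		log.Fatal(err)
	}
	// Same URI as KVMQemuURI in the cluster config dump.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}
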
	I0814 16:26:03.064138   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:f2:f7:9d in network default
	I0814 16:26:03.064762   31878 main.go:141] libmachine: (ha-597780-m02) Ensuring networks are active...
	I0814 16:26:03.064786   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:03.065457   31878 main.go:141] libmachine: (ha-597780-m02) Ensuring network default is active
	I0814 16:26:03.065752   31878 main.go:141] libmachine: (ha-597780-m02) Ensuring network mk-ha-597780 is active
	I0814 16:26:03.066114   31878 main.go:141] libmachine: (ha-597780-m02) Getting domain xml...
	I0814 16:26:03.066935   31878 main.go:141] libmachine: (ha-597780-m02) Creating domain...
	I0814 16:26:04.286666   31878 main.go:141] libmachine: (ha-597780-m02) Waiting to get IP...
	I0814 16:26:04.287534   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:04.287963   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:04.288008   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:04.287947   32270 retry.go:31] will retry after 284.974697ms: waiting for machine to come up
	I0814 16:26:04.574439   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:04.574948   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:04.574982   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:04.574905   32270 retry.go:31] will retry after 302.655814ms: waiting for machine to come up
	I0814 16:26:04.879559   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:04.880069   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:04.880095   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:04.880024   32270 retry.go:31] will retry after 418.223326ms: waiting for machine to come up
	I0814 16:26:05.299625   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:05.300130   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:05.300157   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:05.300082   32270 retry.go:31] will retry after 429.163095ms: waiting for machine to come up
	I0814 16:26:05.730403   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:05.730794   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:05.730820   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:05.730767   32270 retry.go:31] will retry after 570.642173ms: waiting for machine to come up
	I0814 16:26:06.303597   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:06.304125   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:06.304152   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:06.304081   32270 retry.go:31] will retry after 714.864202ms: waiting for machine to come up
	I0814 16:26:07.020905   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:07.021301   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:07.021340   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:07.021271   32270 retry.go:31] will retry after 1.021402695s: waiting for machine to come up
	I0814 16:26:08.044492   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:08.045020   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:08.045044   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:08.044979   32270 retry.go:31] will retry after 1.125931245s: waiting for machine to come up
	I0814 16:26:09.172396   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:09.172980   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:09.173010   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:09.172925   32270 retry.go:31] will retry after 1.215910282s: waiting for machine to come up
	I0814 16:26:10.390312   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:10.390900   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:10.390931   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:10.390850   32270 retry.go:31] will retry after 1.997454268s: waiting for machine to come up
	I0814 16:26:12.390167   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:12.390590   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:12.390617   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:12.390553   32270 retry.go:31] will retry after 1.986753055s: waiting for machine to come up
	I0814 16:26:14.379278   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:14.379718   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:14.379749   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:14.379679   32270 retry.go:31] will retry after 2.641653092s: waiting for machine to come up
	I0814 16:26:17.024462   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:17.024995   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:17.025018   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:17.024952   32270 retry.go:31] will retry after 2.84006709s: waiting for machine to come up
	I0814 16:26:19.868041   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:19.868476   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find current IP address of domain ha-597780-m02 in network mk-ha-597780
	I0814 16:26:19.868502   31878 main.go:141] libmachine: (ha-597780-m02) DBG | I0814 16:26:19.868432   32270 retry.go:31] will retry after 3.47024794s: waiting for machine to come up
	I0814 16:26:23.340057   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.340526   31878 main.go:141] libmachine: (ha-597780-m02) Found IP for machine: 192.168.39.225
	I0814 16:26:23.340549   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has current primary IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.340556   31878 main.go:141] libmachine: (ha-597780-m02) Reserving static IP address...
	I0814 16:26:23.340995   31878 main.go:141] libmachine: (ha-597780-m02) DBG | unable to find host DHCP lease matching {name: "ha-597780-m02", mac: "52:54:00:a6:ae:4d", ip: "192.168.39.225"} in network mk-ha-597780
	I0814 16:26:23.412027   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Getting to WaitForSSH function...
	I0814 16:26:23.412067   31878 main.go:141] libmachine: (ha-597780-m02) Reserved static IP address: 192.168.39.225
	I0814 16:26:23.412084   31878 main.go:141] libmachine: (ha-597780-m02) Waiting for SSH to be available...
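	The retry lines above show the driver polling libvirt for a DHCP lease and backing off a little longer after each miss until the lease appears (roughly 20s in this run). For reference, a minimal Go sketch of that pattern; lookupLeaseIP is a stub standing in for the real libvirt lease query, and the backoff factor is an assumption, not the exact schedule minikube uses.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP is a stand-in for the real libvirt DHCP-lease query;
	// here it always fails, mimicking "unable to find current IP address".
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries lookupLeaseIP with a growing, jittered delay,
	// mirroring the "will retry after ..." lines in the log above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // back off gradually between polls
		}
		return "", fmt.Errorf("no DHCP lease for %s within %v", mac, timeout)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:a6:ae:4d", 3*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("Found IP for machine:", ip)
		}
	}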
	I0814 16:26:23.414819   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.415353   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.415389   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.415464   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Using SSH client type: external
	I0814 16:26:23.415488   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa (-rw-------)
	I0814 16:26:23.415519   31878 main.go:141] libmachine: (ha-597780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:26:23.415536   31878 main.go:141] libmachine: (ha-597780-m02) DBG | About to run SSH command:
	I0814 16:26:23.415548   31878 main.go:141] libmachine: (ha-597780-m02) DBG | exit 0
	I0814 16:26:23.543508   31878 main.go:141] libmachine: (ha-597780-m02) DBG | SSH cmd err, output: <nil>: 
	I0814 16:26:23.543804   31878 main.go:141] libmachine: (ha-597780-m02) KVM machine creation complete!
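	WaitForSSH above is just "exit 0" run over SSH until the command succeeds, using the external ssh client with the options logged a few lines earlier. A rough sketch of that probe follows; the address and key path in main are placeholders, not taken verbatim from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs "exit 0" on the guest with options similar to those the
	// driver logs; it returns nil once sshd accepts the session.
	func sshReady(addr, keyPath string) error {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		return cmd.Run()
	}

	func main() {
		// Illustrative values only.
		for i := 0; i < 30; i++ {
			if err := sshReady("192.168.39.225", "/path/to/id_rsa"); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for SSH")
	}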
	I0814 16:26:23.544081   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetConfigRaw
	I0814 16:26:23.544649   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:23.544868   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:23.545013   31878 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 16:26:23.545039   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:26:23.546192   31878 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 16:26:23.546209   31878 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 16:26:23.546217   31878 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 16:26:23.546226   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:23.548633   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.549018   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.549048   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.549162   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:23.549343   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.549479   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.549582   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:23.549720   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:23.549944   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:23.549955   31878 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 16:26:23.654797   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:26:23.654820   31878 main.go:141] libmachine: Detecting the provisioner...
	I0814 16:26:23.654829   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:23.658593   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.659070   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.659100   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.659254   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:23.659459   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.659659   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.659814   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:23.659950   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:23.660113   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:23.660122   31878 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 16:26:23.763742   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 16:26:23.763808   31878 main.go:141] libmachine: found compatible host: buildroot
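	Provisioner detection above amounts to running `cat /etc/os-release` and matching the ID field ("buildroot" here). A small sketch of that parsing step, independent of the real libmachine provisioner registry:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release output,
	// stripping optional quotes, so ID and VERSION_ID can be inspected.
	func parseOSRelease(out string) map[string]string {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			kv := strings.SplitN(line, "=", 2)
			fields[kv[0]] = strings.Trim(kv[1], `"`)
		}
		return fields
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		osr := parseOSRelease(out)
		if osr["ID"] == "buildroot" {
			fmt.Println("found compatible host:", osr["ID"])
		}
	}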
	I0814 16:26:23.763818   31878 main.go:141] libmachine: Provisioning with buildroot...
	I0814 16:26:23.763829   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetMachineName
	I0814 16:26:23.764038   31878 buildroot.go:166] provisioning hostname "ha-597780-m02"
	I0814 16:26:23.764060   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetMachineName
	I0814 16:26:23.764242   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:23.766923   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.767359   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.767441   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.767471   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:23.767648   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.767780   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.767883   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:23.768049   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:23.768210   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:23.768221   31878 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-597780-m02 && echo "ha-597780-m02" | sudo tee /etc/hostname
	I0814 16:26:23.884232   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780-m02
	
	I0814 16:26:23.884256   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:23.887354   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.887725   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:23.887754   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:23.887986   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:23.888181   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.888400   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:23.888533   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:23.888694   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:23.888855   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:23.888871   31878 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-597780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-597780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-597780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:26:23.999352   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
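	Hostname provisioning is two remote commands: write /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for the new name, exactly as the shell snippet above shows. The sketch below only assembles those commands; the SSH runner that would execute them is assumed, not shown.

	package main

	import "fmt"

	// hostnameCommands returns the two shell commands the provisioner runs:
	// one to set /etc/hostname, one to patch /etc/hosts idempotently.
	func hostnameCommands(name string) []string {
		set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
		hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, name)
		return []string{set, hosts}
	}

	func main() {
		for _, cmd := range hostnameCommands("ha-597780-m02") {
			fmt.Println(cmd)
			fmt.Println("---")
		}
	}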
	I0814 16:26:23.999387   31878 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:26:23.999410   31878 buildroot.go:174] setting up certificates
	I0814 16:26:23.999428   31878 provision.go:84] configureAuth start
	I0814 16:26:23.999448   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetMachineName
	I0814 16:26:23.999743   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:26:24.003017   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.003410   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.003444   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.003644   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.006103   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.006490   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.006539   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.006751   31878 provision.go:143] copyHostCerts
	I0814 16:26:24.006781   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:26:24.006821   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 16:26:24.006832   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:26:24.006902   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:26:24.006977   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:26:24.006995   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 16:26:24.007001   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:26:24.007025   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:26:24.007067   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:26:24.007083   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 16:26:24.007089   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:26:24.007117   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:26:24.007169   31878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.ha-597780-m02 san=[127.0.0.1 192.168.39.225 ha-597780-m02 localhost minikube]
	I0814 16:26:24.231041   31878 provision.go:177] copyRemoteCerts
	I0814 16:26:24.231099   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:26:24.231121   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.233659   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.233972   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.234000   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.234192   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.234381   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.234562   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.234701   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:26:24.317482   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 16:26:24.317565   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:26:24.341676   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 16:26:24.341753   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 16:26:24.364442   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 16:26:24.364525   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:26:24.385840   31878 provision.go:87] duration metric: took 386.39693ms to configureAuth
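	configureAuth above first refreshes the local copies of cert.pem, key.pem and ca.pem (remove the stale copy, write the new one), then issues a server certificate whose SANs cover the node IPs and hostname, and pushes the results to /etc/docker on the guest. The following is only a sketch of the local copy step, with illustrative paths; the SAN-bearing certificate generation is omitted.

	package main

	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)

	// copyHostCert replaces dst with src, mirroring the
	// "found ..., removing ..." / "cp: ..." lines in the log.
	func copyHostCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY, 0600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		// Illustrative paths only.
		certsDir := "/tmp/.minikube/certs"
		for _, name := range []string{"cert.pem", "key.pem", "ca.pem"} {
			src := filepath.Join(certsDir, name)
			dst := filepath.Join("/tmp/.minikube", name)
			if err := copyHostCert(src, dst); err != nil {
				fmt.Println("copy", name, ":", err)
			}
		}
	}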
	I0814 16:26:24.385866   31878 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:26:24.386068   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:26:24.386144   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.389078   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.389379   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.389411   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.389539   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.389764   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.389920   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.390034   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.390194   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:24.390385   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:24.390404   31878 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:26:24.649600   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:26:24.649624   31878 main.go:141] libmachine: Checking connection to Docker...
	I0814 16:26:24.649633   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetURL
	I0814 16:26:24.651070   31878 main.go:141] libmachine: (ha-597780-m02) DBG | Using libvirt version 6000000
	I0814 16:26:24.653597   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.653953   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.653981   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.654126   31878 main.go:141] libmachine: Docker is up and running!
	I0814 16:26:24.654152   31878 main.go:141] libmachine: Reticulating splines...
	I0814 16:26:24.654174   31878 client.go:171] duration metric: took 22.106515659s to LocalClient.Create
	I0814 16:26:24.654210   31878 start.go:167] duration metric: took 22.106603682s to libmachine.API.Create "ha-597780"
	I0814 16:26:24.654222   31878 start.go:293] postStartSetup for "ha-597780-m02" (driver="kvm2")
	I0814 16:26:24.654237   31878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:26:24.654257   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.654507   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:26:24.654535   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.656700   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.657012   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.657044   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.657162   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.657333   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.657488   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.657704   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:26:24.737221   31878 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:26:24.741077   31878 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:26:24.741102   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:26:24.741166   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:26:24.741249   31878 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 16:26:24.741262   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 16:26:24.741367   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 16:26:24.750247   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:26:24.772658   31878 start.go:296] duration metric: took 118.421827ms for postStartSetup
	I0814 16:26:24.772699   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetConfigRaw
	I0814 16:26:24.773303   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:26:24.776032   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.776377   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.776418   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.776612   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:26:24.776786   31878 start.go:128] duration metric: took 22.248990351s to createHost
	I0814 16:26:24.776808   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.778808   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.779103   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.779131   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.779232   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.779424   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.779643   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.779797   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.779964   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:26:24.780190   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0814 16:26:24.780208   31878 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:26:24.883637   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723652784.859430769
	
	I0814 16:26:24.883659   31878 fix.go:216] guest clock: 1723652784.859430769
	I0814 16:26:24.883669   31878 fix.go:229] Guest: 2024-08-14 16:26:24.859430769 +0000 UTC Remote: 2024-08-14 16:26:24.776797078 +0000 UTC m=+68.259081330 (delta=82.633691ms)
	I0814 16:26:24.883687   31878 fix.go:200] guest clock delta is within tolerance: 82.633691ms
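	The clock check above reads the guest time with `date +%s.%N`, subtracts the host time, and accepts the machine if the offset is small (82.633691ms here). A sketch that reproduces that arithmetic from the logged values; the 2s tolerance used below is an assumption, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the "seconds.nanoseconds" string that `date +%s.%N`
	// prints on the guest and returns its offset from the given local time.
	// The fractional field is assumed to be the usual 9-digit %N output.
	func clockDelta(guest string, local time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(local), nil
	}

	func main() {
		// Guest and host timestamps taken from the log lines above.
		local := time.Unix(1723652784, 776797078)
		delta, err := clockDelta("1723652784.859430769", local)
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= 2*time.Second)
	}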
	I0814 16:26:24.883694   31878 start.go:83] releasing machines lock for "ha-597780-m02", held for 22.356065528s
	I0814 16:26:24.883717   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.884003   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:26:24.886630   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.886977   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.887007   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.889527   31878 out.go:177] * Found network options:
	I0814 16:26:24.890898   31878 out.go:177]   - NO_PROXY=192.168.39.4
	W0814 16:26:24.892203   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	I0814 16:26:24.892251   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.892770   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.892991   31878 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:26:24.893074   31878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:26:24.893118   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	W0814 16:26:24.893204   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	I0814 16:26:24.893275   31878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:26:24.893296   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:26:24.895815   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.896074   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.896253   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.896282   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.896447   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.896561   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:24.896594   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:24.896636   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.896754   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:26:24.896910   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:26:24.896912   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.897118   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:26:24.897112   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:26:24.897291   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:26:25.123383   31878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:26:25.128927   31878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:26:25.128982   31878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:26:25.144488   31878 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 16:26:25.144515   31878 start.go:495] detecting cgroup driver to use...
	I0814 16:26:25.144579   31878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:26:25.161158   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:26:25.174850   31878 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:26:25.174925   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:26:25.188000   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:26:25.200663   31878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:26:25.309694   31878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:26:25.448029   31878 docker.go:233] disabling docker service ...
	I0814 16:26:25.448099   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:26:25.462055   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:26:25.474404   31878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:26:25.606152   31878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:26:25.739595   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:26:25.752778   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:26:25.772085   31878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:26:25.772151   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.782033   31878 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:26:25.782094   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.791657   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.801204   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.811708   31878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:26:25.821849   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.833758   31878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.850966   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:26:25.862506   31878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:26:25.871925   31878 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 16:26:25.871982   31878 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 16:26:25.883834   31878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:26:25.893019   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:26:26.002392   31878 ssh_runner.go:195] Run: sudo systemctl restart crio
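	Configuring CRI-O above is a series of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl), followed by a daemon-reload and restart. The sketch below just lists the core edits for a remote runner; the runner and the sysctl-related edits are assumed/omitted, not the real ssh_runner API.

	package main

	import "fmt"

	// crioConfigCommands returns the sed edits the log shows being applied to
	// CRI-O's drop-in config before the service is restarted.
	func crioConfigCommands(pauseImage, cgroupManager string) []string {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		return []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
	}

	func main() {
		for _, c := range crioConfigCommands("registry.k8s.io/pause:3.10", "cgroupfs") {
			fmt.Println(c)
		}
	}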
	I0814 16:26:26.134593   31878 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:26:26.134672   31878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:26:26.139386   31878 start.go:563] Will wait 60s for crictl version
	I0814 16:26:26.139468   31878 ssh_runner.go:195] Run: which crictl
	I0814 16:26:26.142753   31878 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:26:26.179459   31878 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:26:26.179557   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:26:26.204792   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:26:26.232170   31878 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:26:26.233559   31878 out.go:177]   - env NO_PROXY=192.168.39.4
	I0814 16:26:26.234736   31878 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:26:26.237356   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:26.237735   31878 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:26:16 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:26:26.237759   31878 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:26:26.237991   31878 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:26:26.241851   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:26:26.253196   31878 mustload.go:65] Loading cluster: ha-597780
	I0814 16:26:26.253368   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:26:26.253614   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:26.253648   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:26.269329   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
	I0814 16:26:26.269734   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:26.270248   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:26.270272   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:26.270645   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:26.270813   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:26:26.272621   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:26:26.272989   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:26:26.273013   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:26:26.287349   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0814 16:26:26.287789   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:26:26.288195   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:26:26.288213   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:26:26.288518   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:26:26.288717   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:26:26.288862   31878 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780 for IP: 192.168.39.225
	I0814 16:26:26.288871   31878 certs.go:194] generating shared ca certs ...
	I0814 16:26:26.288884   31878 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:26.288990   31878 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:26:26.289031   31878 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:26:26.289040   31878 certs.go:256] generating profile certs ...
	I0814 16:26:26.289116   31878 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key
	I0814 16:26:26.289139   31878 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.4cd622c9
	I0814 16:26:26.289150   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.4cd622c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.225 192.168.39.254]
	I0814 16:26:26.631706   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.4cd622c9 ...
	I0814 16:26:26.631738   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.4cd622c9: {Name:mk28e0b5520bad73e9acb336a4dd406a300487c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:26.631902   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.4cd622c9 ...
	I0814 16:26:26.631916   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.4cd622c9: {Name:mk9354ebb43811e70c9c7fd083d8203d518d0483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:26:26.631988   31878 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.4cd622c9 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt
	I0814 16:26:26.632110   31878 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.4cd622c9 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key
	I0814 16:26:26.632230   31878 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key
	I0814 16:26:26.632244   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 16:26:26.632259   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 16:26:26.632273   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 16:26:26.632285   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 16:26:26.632298   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 16:26:26.632311   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 16:26:26.632325   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 16:26:26.632344   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 16:26:26.632393   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 16:26:26.632420   31878 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 16:26:26.632428   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:26:26.632448   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:26:26.632469   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:26:26.632490   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:26:26.632524   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:26:26.632549   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:26:26.632563   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 16:26:26.632576   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 16:26:26.632620   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:26:26.636176   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:26.636669   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:26:26.636698   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:26:26.636893   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:26:26.637117   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:26:26.637328   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:26:26.637506   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:26:26.707707   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0814 16:26:26.712554   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0814 16:26:26.723113   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0814 16:26:26.727019   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0814 16:26:26.736104   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0814 16:26:26.739655   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0814 16:26:26.749033   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0814 16:26:26.752517   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0814 16:26:26.761793   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0814 16:26:26.765297   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0814 16:26:26.774268   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0814 16:26:26.777951   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0814 16:26:26.787338   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:26:26.811883   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:26:26.834838   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:26:26.857013   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:26:26.879464   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0814 16:26:26.901765   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:26:26.924506   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:26:26.947630   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:26:26.969205   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:26:26.991149   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 16:26:27.013345   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 16:26:27.035377   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0814 16:26:27.050880   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0814 16:26:27.066377   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0814 16:26:27.081683   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0814 16:26:27.096857   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0814 16:26:27.112524   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0814 16:26:27.127831   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0814 16:26:27.142895   31878 ssh_runner.go:195] Run: openssl version
	I0814 16:26:27.148302   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:26:27.165660   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:26:27.171363   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:26:27.171425   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:26:27.177040   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:26:27.187235   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 16:26:27.197276   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 16:26:27.201413   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 16:26:27.201473   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 16:26:27.206740   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 16:26:27.216704   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 16:26:27.226806   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 16:26:27.231042   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 16:26:27.231104   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 16:26:27.236505   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 16:26:27.247039   31878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:26:27.250817   31878 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:26:27.250866   31878 kubeadm.go:934] updating node {m02 192.168.39.225 8443 v1.31.0 crio true true} ...
	I0814 16:26:27.250943   31878 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-597780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:26:27.250967   31878 kube-vip.go:115] generating kube-vip config ...
	I0814 16:26:27.251000   31878 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 16:26:27.268414   31878 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 16:26:27.268507   31878 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0814 16:26:27.268578   31878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:26:27.278118   31878 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0814 16:26:27.278186   31878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0814 16:26:27.287145   31878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0814 16:26:27.287172   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0814 16:26:27.287214   31878 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0814 16:26:27.287239   31878 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0814 16:26:27.287243   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0814 16:26:27.291107   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0814 16:26:27.291150   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0814 16:27:00.804434   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0814 16:27:00.804518   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0814 16:27:00.809548   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0814 16:27:00.809588   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0814 16:27:13.221675   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:27:13.236972   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0814 16:27:13.237073   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0814 16:27:13.241559   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0814 16:27:13.241590   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0814 16:27:13.538950   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0814 16:27:13.547991   31878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 16:27:13.563482   31878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:27:13.579692   31878 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0814 16:27:13.594987   31878 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 16:27:13.598421   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:27:13.609968   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:27:13.735676   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:27:13.751272   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:27:13.751769   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:27:13.751828   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:27:13.768034   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44469
	I0814 16:27:13.768463   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:27:13.769009   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:27:13.769038   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:27:13.769368   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:27:13.769543   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:27:13.769708   31878 start.go:317] joinCluster: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:27:13.769820   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0814 16:27:13.769837   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:27:13.772793   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:27:13.773199   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:27:13.773221   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:27:13.773393   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:27:13.773561   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:27:13.773709   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:27:13.773845   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:27:13.920227   31878 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:27:13.920280   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zaslr5.s1i9whjerq2tnrrc --discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-597780-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I0814 16:27:35.956068   31878 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zaslr5.s1i9whjerq2tnrrc --discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-597780-m02 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (22.035764529s)
	I0814 16:27:35.956111   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0814 16:27:36.529697   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-597780-m02 minikube.k8s.io/updated_at=2024_08_14T16_27_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=ha-597780 minikube.k8s.io/primary=false
	I0814 16:27:36.645864   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-597780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0814 16:27:36.787680   31878 start.go:319] duration metric: took 23.017968041s to joinCluster
	I0814 16:27:36.787754   31878 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:27:36.788078   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:27:36.789164   31878 out.go:177] * Verifying Kubernetes components...
	I0814 16:27:36.790415   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:27:37.054953   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:27:37.109578   31878 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:27:37.109807   31878 kapi.go:59] client config for ha-597780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key", CAFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f170c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
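For context, the rest.Config dumped above has QPS:0 and Burst:0, so client-go falls back to its default client-side rate limiter (roughly 5 requests/s with a burst of 10); that limiter is what later produces the "Waited for ... due to client-side throttling, not priority and fairness" entries. A minimal, hypothetical sketch of raising those limits on a similar config (kubeconfig path taken from this log; this is not minikube's own code):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative only: load the same kubeconfig the log above uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19446-13977/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained client-side request rate (default is ~5 when left at 0)
	cfg.Burst = 100 // short bursts allowed above QPS (default is 10 when left at 0)
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}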
	W0814 16:27:37.109861   31878 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0814 16:27:37.110035   31878 node_ready.go:35] waiting up to 6m0s for node "ha-597780-m02" to be "Ready" ...
	I0814 16:27:37.110118   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:37.110126   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:37.110132   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:37.110138   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:37.121900   31878 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0814 16:27:37.611026   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:37.611059   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:37.611071   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:37.611077   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:37.614806   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:38.110646   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:38.110665   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:38.110673   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:38.110679   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:38.132949   31878 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0814 16:27:38.610329   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:38.610352   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:38.610360   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:38.610364   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:38.613740   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:39.111009   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:39.111034   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:39.111042   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:39.111048   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:39.114115   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:39.114632   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:39.611071   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:39.611098   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:39.611109   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:39.611114   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:39.614635   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:40.110569   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:40.110604   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:40.110616   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:40.110623   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:40.113861   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:40.610273   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:40.610294   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:40.610302   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:40.610306   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:40.614230   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:41.110371   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:41.110394   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:41.110410   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:41.110415   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:41.113897   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:41.610972   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:41.610996   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:41.611005   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:41.611010   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:41.613977   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:41.614505   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:42.110441   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:42.110468   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:42.110480   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:42.110487   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:42.114186   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:42.610662   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:42.610750   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:42.610765   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:42.610772   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:42.618561   31878 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0814 16:27:43.110582   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:43.110603   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:43.110614   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:43.110618   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:43.115137   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:27:43.611032   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:43.611054   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:43.611062   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:43.611065   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:43.614576   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:43.615136   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:44.111097   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:44.111121   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:44.111133   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:44.111138   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:44.117130   31878 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0814 16:27:44.610994   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:44.611035   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:44.611050   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:44.611055   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:44.614384   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:45.110195   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:45.110217   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:45.110225   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:45.110229   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:45.113002   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:45.610758   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:45.610779   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:45.610787   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:45.610792   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:45.614350   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:46.110258   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:46.110285   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:46.110296   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:46.110300   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:46.113875   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:46.114503   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:46.611108   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:46.611132   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:46.611140   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:46.611143   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:46.614624   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:47.110971   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:47.110995   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:47.111003   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:47.111007   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:47.114336   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:47.610930   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:47.610956   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:47.610964   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:47.610968   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:47.614255   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:48.110683   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:48.110707   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:48.110714   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:48.110720   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:48.114267   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:48.114881   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:48.610255   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:48.610278   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:48.610286   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:48.610292   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:48.613746   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:49.111090   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:49.111110   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:49.111118   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:49.111121   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:49.114348   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:49.611204   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:49.611229   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:49.611238   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:49.611243   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:49.614666   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:50.110595   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:50.110627   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:50.110712   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:50.110741   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:50.114172   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:50.611218   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:50.611243   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:50.611254   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:50.611259   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:50.614323   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:50.614761   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:51.111213   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:51.111233   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:51.111241   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:51.111244   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:51.114272   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:51.610249   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:51.610272   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:51.610280   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:51.610284   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:51.613721   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:52.110987   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:52.111011   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:52.111024   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:52.111029   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:52.115084   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:27:52.610457   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:52.610484   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:52.610496   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:52.610503   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:52.613998   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:53.111009   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:53.111039   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:53.111050   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:53.111055   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:53.114883   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:53.115623   31878 node_ready.go:53] node "ha-597780-m02" has status "Ready":"False"
	I0814 16:27:53.611121   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:53.611149   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:53.611160   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:53.611166   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:53.614744   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:54.111036   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:54.111063   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.111071   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.111074   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.114369   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:54.610298   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:54.610321   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.610329   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.610334   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.614198   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:54.614633   31878 node_ready.go:49] node "ha-597780-m02" has status "Ready":"True"
	I0814 16:27:54.614651   31878 node_ready.go:38] duration metric: took 17.504589975s for node "ha-597780-m02" to be "Ready" ...
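The repeated GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02 requests above are polling the node's Ready condition until it reports True. A minimal client-go sketch of that style of poll (illustrative only, assuming the kubeconfig path shown in this log; this is not the node_ready.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19446-13977/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// Fetch the node and look for a Ready condition with status True.
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-597780-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-597780-m02 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above polls on roughly this cadence
	}
}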
	I0814 16:27:54.614659   31878 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:27:54.614735   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:54.614757   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.614764   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.614770   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.618779   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:54.624568   31878 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.624654   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-28k2m
	I0814 16:27:54.624662   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.624670   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.624674   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.627406   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.628063   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:54.628077   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.628085   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.628088   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.630282   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.630905   31878 pod_ready.go:92] pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:54.630925   31878 pod_ready.go:81] duration metric: took 6.334777ms for pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.630935   31878 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.630993   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-kc84b
	I0814 16:27:54.631003   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.631012   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.631019   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.633363   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.633981   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:54.633995   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.634003   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.634007   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.636060   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.636563   31878 pod_ready.go:92] pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:54.636577   31878 pod_ready.go:81] duration metric: took 5.635779ms for pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.636585   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.636634   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780
	I0814 16:27:54.636642   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.636648   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.636651   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.639135   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.639918   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:54.639940   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.639951   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.639956   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.642170   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.642638   31878 pod_ready.go:92] pod "etcd-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:54.642657   31878 pod_ready.go:81] duration metric: took 6.066171ms for pod "etcd-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.642666   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.642718   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780-m02
	I0814 16:27:54.642730   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.642739   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.642744   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.644933   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.645402   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:54.645416   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.645426   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.645431   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.647687   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:54.648147   31878 pod_ready.go:92] pod "etcd-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:54.648163   31878 pod_ready.go:81] duration metric: took 5.490635ms for pod "etcd-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.648178   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:54.810504   31878 request.go:632] Waited for 162.250358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780
	I0814 16:27:54.810602   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780
	I0814 16:27:54.810609   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:54.810617   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:54.810626   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:54.814213   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.011205   31878 request.go:632] Waited for 196.4183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:55.011305   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:55.011315   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.011339   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.011346   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.014514   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.015015   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:55.015033   31878 pod_ready.go:81] duration metric: took 366.849185ms for pod "kube-apiserver-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.015046   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.211178   31878 request.go:632] Waited for 196.066291ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m02
	I0814 16:27:55.211243   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m02
	I0814 16:27:55.211249   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.211259   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.211265   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.214793   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.410788   31878 request.go:632] Waited for 195.364944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:55.410852   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:55.410861   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.410874   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.410883   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.413944   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.414382   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:55.414403   31878 pod_ready.go:81] duration metric: took 399.349092ms for pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.414413   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.610403   31878 request.go:632] Waited for 195.913912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780
	I0814 16:27:55.610464   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780
	I0814 16:27:55.610469   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.610477   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.610491   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.614356   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.810463   31878 request.go:632] Waited for 195.275583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:55.810557   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:55.810565   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:55.810574   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:55.810580   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:55.814316   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:55.814901   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:55.814921   31878 pod_ready.go:81] duration metric: took 400.495173ms for pod "kube-controller-manager-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:55.814931   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.011028   31878 request.go:632] Waited for 196.039511ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m02
	I0814 16:27:56.011114   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m02
	I0814 16:27:56.011125   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.011137   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.011148   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.014324   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:56.211256   31878 request.go:632] Waited for 196.346648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:56.211320   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:56.211343   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.211355   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.211359   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.214448   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:56.215132   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:56.215149   31878 pod_ready.go:81] duration metric: took 400.212519ms for pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.215158   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4q2dq" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.411154   31878 request.go:632] Waited for 195.907518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q2dq
	I0814 16:27:56.411218   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q2dq
	I0814 16:27:56.411226   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.411236   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.411244   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.414675   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:56.610587   31878 request.go:632] Waited for 195.328171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:56.610642   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:56.610647   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.610654   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.610659   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.614199   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:56.614650   31878 pod_ready.go:92] pod "kube-proxy-4q2dq" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:56.614667   31878 pod_ready.go:81] duration metric: took 399.503285ms for pod "kube-proxy-4q2dq" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.614677   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-79txl" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:56.811033   31878 request.go:632] Waited for 196.298948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-79txl
	I0814 16:27:56.811111   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-79txl
	I0814 16:27:56.811118   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:56.811126   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:56.811134   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:56.814148   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:27:57.011077   31878 request.go:632] Waited for 196.348399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:57.011130   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:57.011135   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.011143   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.011147   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.014362   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.014957   31878 pod_ready.go:92] pod "kube-proxy-79txl" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:57.014977   31878 pod_ready.go:81] duration metric: took 400.293753ms for pod "kube-proxy-79txl" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.014985   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.211046   31878 request.go:632] Waited for 196.001751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780
	I0814 16:27:57.211104   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780
	I0814 16:27:57.211111   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.211121   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.211129   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.214469   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.410405   31878 request.go:632] Waited for 195.287753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:57.410470   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:27:57.410475   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.410487   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.410491   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.413675   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.414448   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:57.414471   31878 pod_ready.go:81] duration metric: took 399.477679ms for pod "kube-scheduler-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.414481   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.610903   31878 request.go:632] Waited for 196.365721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m02
	I0814 16:27:57.610978   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m02
	I0814 16:27:57.610990   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.611003   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.611011   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.614595   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.810846   31878 request.go:632] Waited for 195.360436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:57.810900   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:27:57.810904   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.810911   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.810915   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.814792   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:57.815436   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:27:57.815455   31878 pod_ready.go:81] duration metric: took 400.968481ms for pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:27:57.815466   31878 pod_ready.go:38] duration metric: took 3.20079656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
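For readers following the pod_ready checks above: this is a minimal, standalone sketch (not minikube's own pod_ready.go) of what such a readiness wait looks like with client-go. The kubeconfig location, the 400ms poll interval and the 6-minute budget are assumptions modelled on the log; the pod name is the one from this run.

// ready_wait.go - sketch of polling a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumes ~/.kube/config
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll roughly every 400ms for up to 6 minutes, mirroring the cadence in the log.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-4q2dq", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(400 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}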
	I0814 16:27:57.815478   31878 api_server.go:52] waiting for apiserver process to appear ...
	I0814 16:27:57.815532   31878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:27:57.830562   31878 api_server.go:72] duration metric: took 21.042773881s to wait for apiserver process to appear ...
	I0814 16:27:57.830587   31878 api_server.go:88] waiting for apiserver healthz status ...
	I0814 16:27:57.830604   31878 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0814 16:27:57.838936   31878 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0814 16:27:57.839023   31878 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0814 16:27:57.839036   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:57.839045   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:57.839050   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:57.839901   31878 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0814 16:27:57.840004   31878 api_server.go:141] control plane version: v1.31.0
	I0814 16:27:57.840019   31878 api_server.go:131] duration metric: took 9.426657ms to wait for apiserver health ...
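The healthz wait above boils down to repeatedly issuing GET /healthz until it answers 200 "ok". A rough, self-contained sketch follows; TLS verification is skipped here only to keep the example short (minikube itself trusts the cluster CA), and the retry count is an arbitrary choice.

// healthz_probe.go - sketch of polling the apiserver health endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	url := "https://192.168.39.4:8443/healthz" // address taken from the log above

	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}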
	I0814 16:27:57.840026   31878 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 16:27:58.010362   31878 request.go:632] Waited for 170.272025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:58.010442   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:58.010448   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:58.010460   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:58.010467   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:58.014912   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:27:58.020634   31878 system_pods.go:59] 17 kube-system pods found
	I0814 16:27:58.020664   31878 system_pods.go:61] "coredns-6f6b679f8f-28k2m" [ec3725c1-3e21-49b0-9caf-922ef1928ed8] Running
	I0814 16:27:58.020671   31878 system_pods.go:61] "coredns-6f6b679f8f-kc84b" [3a483f17-cab5-4090-abc6-808d84397a8a] Running
	I0814 16:27:58.020678   31878 system_pods.go:61] "etcd-ha-597780" [9af2f660-01fe-499f-902e-4988a5527c5a] Running
	I0814 16:27:58.020684   31878 system_pods.go:61] "etcd-ha-597780-m02" [c811879c-cf46-4c5b-aec2-6fa9aae64d13] Running
	I0814 16:27:58.020688   31878 system_pods.go:61] "kindnet-c8f8r" [b053dfba-820a-416f-9233-ececd7159e1e] Running
	I0814 16:27:58.020691   31878 system_pods.go:61] "kindnet-zm75h" [1e5eabaf-5973-4658-b12b-f7faf67b8af7] Running
	I0814 16:27:58.020694   31878 system_pods.go:61] "kube-apiserver-ha-597780" [8efb614b-9a4f-4029-aba3-e2183fb20627] Running
	I0814 16:27:58.020698   31878 system_pods.go:61] "kube-apiserver-ha-597780-m02" [26d7d4c8-6f40-4217-bf24-f9f94c9f8a79] Running
	I0814 16:27:58.020701   31878 system_pods.go:61] "kube-controller-manager-ha-597780" [ad59b322-ee34-4041-af68-8b5ffcdff9dd] Running
	I0814 16:27:58.020705   31878 system_pods.go:61] "kube-controller-manager-ha-597780-m02" [a25ce1a0-cedb-40cd-ade3-ba63a4b69cd4] Running
	I0814 16:27:58.020709   31878 system_pods.go:61] "kube-proxy-4q2dq" [9e95547c-001c-4942-b160-33e37a389820] Running
	I0814 16:27:58.020715   31878 system_pods.go:61] "kube-proxy-79txl" [ea48ab09-60d5-4133-accc-f3fd69a50c5d] Running
	I0814 16:27:58.020718   31878 system_pods.go:61] "kube-scheduler-ha-597780" [c1576ee1-5aed-4177-b37e-76786ceee1a1] Running
	I0814 16:27:58.020721   31878 system_pods.go:61] "kube-scheduler-ha-597780-m02" [cb250902-8200-423a-8bd3-463aebd7379c] Running
	I0814 16:27:58.020724   31878 system_pods.go:61] "kube-vip-ha-597780" [a5738727-b1a0-4750-9e02-784278225ee4] Running
	I0814 16:27:58.020727   31878 system_pods.go:61] "kube-vip-ha-597780-m02" [c2f92dd8-8248-44a7-bc10-a91546e50eb9] Running
	I0814 16:27:58.020733   31878 system_pods.go:61] "storage-provisioner" [9939439d-cddd-4505-b554-b72f749269fd] Running
	I0814 16:27:58.020738   31878 system_pods.go:74] duration metric: took 180.705381ms to wait for pod list to return data ...
	I0814 16:27:58.020745   31878 default_sa.go:34] waiting for default service account to be created ...
	I0814 16:27:58.211158   31878 request.go:632] Waited for 190.329272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0814 16:27:58.211222   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0814 16:27:58.211227   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:58.211234   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:58.211237   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:58.215157   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:58.215419   31878 default_sa.go:45] found service account: "default"
	I0814 16:27:58.215438   31878 default_sa.go:55] duration metric: took 194.686453ms for default service account to be created ...
	I0814 16:27:58.215452   31878 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 16:27:58.410868   31878 request.go:632] Waited for 195.353496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:58.410924   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:27:58.410930   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:58.410938   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:58.410941   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:58.415415   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:27:58.420260   31878 system_pods.go:86] 17 kube-system pods found
	I0814 16:27:58.420285   31878 system_pods.go:89] "coredns-6f6b679f8f-28k2m" [ec3725c1-3e21-49b0-9caf-922ef1928ed8] Running
	I0814 16:27:58.420291   31878 system_pods.go:89] "coredns-6f6b679f8f-kc84b" [3a483f17-cab5-4090-abc6-808d84397a8a] Running
	I0814 16:27:58.420295   31878 system_pods.go:89] "etcd-ha-597780" [9af2f660-01fe-499f-902e-4988a5527c5a] Running
	I0814 16:27:58.420299   31878 system_pods.go:89] "etcd-ha-597780-m02" [c811879c-cf46-4c5b-aec2-6fa9aae64d13] Running
	I0814 16:27:58.420303   31878 system_pods.go:89] "kindnet-c8f8r" [b053dfba-820a-416f-9233-ececd7159e1e] Running
	I0814 16:27:58.420307   31878 system_pods.go:89] "kindnet-zm75h" [1e5eabaf-5973-4658-b12b-f7faf67b8af7] Running
	I0814 16:27:58.420311   31878 system_pods.go:89] "kube-apiserver-ha-597780" [8efb614b-9a4f-4029-aba3-e2183fb20627] Running
	I0814 16:27:58.420316   31878 system_pods.go:89] "kube-apiserver-ha-597780-m02" [26d7d4c8-6f40-4217-bf24-f9f94c9f8a79] Running
	I0814 16:27:58.420322   31878 system_pods.go:89] "kube-controller-manager-ha-597780" [ad59b322-ee34-4041-af68-8b5ffcdff9dd] Running
	I0814 16:27:58.420328   31878 system_pods.go:89] "kube-controller-manager-ha-597780-m02" [a25ce1a0-cedb-40cd-ade3-ba63a4b69cd4] Running
	I0814 16:27:58.420334   31878 system_pods.go:89] "kube-proxy-4q2dq" [9e95547c-001c-4942-b160-33e37a389820] Running
	I0814 16:27:58.420349   31878 system_pods.go:89] "kube-proxy-79txl" [ea48ab09-60d5-4133-accc-f3fd69a50c5d] Running
	I0814 16:27:58.420359   31878 system_pods.go:89] "kube-scheduler-ha-597780" [c1576ee1-5aed-4177-b37e-76786ceee1a1] Running
	I0814 16:27:58.420363   31878 system_pods.go:89] "kube-scheduler-ha-597780-m02" [cb250902-8200-423a-8bd3-463aebd7379c] Running
	I0814 16:27:58.420367   31878 system_pods.go:89] "kube-vip-ha-597780" [a5738727-b1a0-4750-9e02-784278225ee4] Running
	I0814 16:27:58.420371   31878 system_pods.go:89] "kube-vip-ha-597780-m02" [c2f92dd8-8248-44a7-bc10-a91546e50eb9] Running
	I0814 16:27:58.420374   31878 system_pods.go:89] "storage-provisioner" [9939439d-cddd-4505-b554-b72f749269fd] Running
	I0814 16:27:58.420379   31878 system_pods.go:126] duration metric: took 204.92215ms to wait for k8s-apps to be running ...
	I0814 16:27:58.420388   31878 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 16:27:58.420440   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:27:58.436102   31878 system_svc.go:56] duration metric: took 15.704365ms WaitForService to wait for kubelet
	I0814 16:27:58.436138   31878 kubeadm.go:582] duration metric: took 21.648350486s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:27:58.436161   31878 node_conditions.go:102] verifying NodePressure condition ...
	I0814 16:27:58.610643   31878 request.go:632] Waited for 174.374721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0814 16:27:58.610709   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0814 16:27:58.610716   31878 round_trippers.go:469] Request Headers:
	I0814 16:27:58.610725   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:27:58.610731   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:27:58.614510   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:27:58.615527   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:27:58.615554   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:27:58.615567   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:27:58.615576   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:27:58.615582   31878 node_conditions.go:105] duration metric: took 179.415269ms to run NodePressure ...
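The NodePressure step reads each node's capacity (CPU, ephemeral storage) from the node status, which is where the "17734596Ki" and "cpu capacity is 2" figures above come from. A small client-go sketch of that read, assuming a local kubeconfig:

// node_capacity.go - sketch of listing nodes and printing their capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}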
	I0814 16:27:58.615598   31878 start.go:241] waiting for startup goroutines ...
	I0814 16:27:58.615631   31878 start.go:255] writing updated cluster config ...
	I0814 16:27:58.617709   31878 out.go:177] 
	I0814 16:27:58.619059   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:27:58.619159   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:27:58.620858   31878 out.go:177] * Starting "ha-597780-m03" control-plane node in "ha-597780" cluster
	I0814 16:27:58.621933   31878 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:27:58.621951   31878 cache.go:56] Caching tarball of preloaded images
	I0814 16:27:58.622043   31878 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:27:58.622054   31878 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:27:58.622132   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:27:58.622289   31878 start.go:360] acquireMachinesLock for ha-597780-m03: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:27:58.622326   31878 start.go:364] duration metric: took 20.192µs to acquireMachinesLock for "ha-597780-m03"
	I0814 16:27:58.622344   31878 start.go:93] Provisioning new machine with config: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:27:58.622430   31878 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0814 16:27:58.623962   31878 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 16:27:58.624082   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:27:58.624116   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:27:58.639175   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38571
	I0814 16:27:58.639655   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:27:58.640088   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:27:58.640108   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:27:58.640444   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:27:58.640606   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetMachineName
	I0814 16:27:58.640754   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:27:58.640907   31878 start.go:159] libmachine.API.Create for "ha-597780" (driver="kvm2")
	I0814 16:27:58.640932   31878 client.go:168] LocalClient.Create starting
	I0814 16:27:58.640963   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 16:27:58.640993   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:27:58.641005   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:27:58.641050   31878 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 16:27:58.641070   31878 main.go:141] libmachine: Decoding PEM data...
	I0814 16:27:58.641080   31878 main.go:141] libmachine: Parsing certificate...
	I0814 16:27:58.641096   31878 main.go:141] libmachine: Running pre-create checks...
	I0814 16:27:58.641104   31878 main.go:141] libmachine: (ha-597780-m03) Calling .PreCreateCheck
	I0814 16:27:58.641289   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetConfigRaw
	I0814 16:27:58.641688   31878 main.go:141] libmachine: Creating machine...
	I0814 16:27:58.641705   31878 main.go:141] libmachine: (ha-597780-m03) Calling .Create
	I0814 16:27:58.641838   31878 main.go:141] libmachine: (ha-597780-m03) Creating KVM machine...
	I0814 16:27:58.643018   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found existing default KVM network
	I0814 16:27:58.643130   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found existing private KVM network mk-ha-597780
	I0814 16:27:58.643232   31878 main.go:141] libmachine: (ha-597780-m03) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03 ...
	I0814 16:27:58.643262   31878 main.go:141] libmachine: (ha-597780-m03) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:27:58.643341   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:27:58.643236   32824 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:27:58.643459   31878 main.go:141] libmachine: (ha-597780-m03) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 16:27:58.873533   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:27:58.873405   32824 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa...
	I0814 16:27:59.244602   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:27:59.244468   32824 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/ha-597780-m03.rawdisk...
	I0814 16:27:59.244636   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Writing magic tar header
	I0814 16:27:59.244655   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Writing SSH key tar header
	I0814 16:27:59.244671   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:27:59.244637   32824 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03 ...
	I0814 16:27:59.244805   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03
	I0814 16:27:59.244831   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03 (perms=drwx------)
	I0814 16:27:59.244839   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 16:27:59.244853   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:27:59.244866   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 16:27:59.244882   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 16:27:59.244893   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home/jenkins
	I0814 16:27:59.244906   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Checking permissions on dir: /home
	I0814 16:27:59.244920   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 16:27:59.244928   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Skipping /home - not owner
	I0814 16:27:59.244943   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 16:27:59.244956   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 16:27:59.244971   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 16:27:59.244983   31878 main.go:141] libmachine: (ha-597780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 16:27:59.244996   31878 main.go:141] libmachine: (ha-597780-m03) Creating domain...
	I0814 16:27:59.245921   31878 main.go:141] libmachine: (ha-597780-m03) define libvirt domain using xml: 
	I0814 16:27:59.245940   31878 main.go:141] libmachine: (ha-597780-m03) <domain type='kvm'>
	I0814 16:27:59.245946   31878 main.go:141] libmachine: (ha-597780-m03)   <name>ha-597780-m03</name>
	I0814 16:27:59.245952   31878 main.go:141] libmachine: (ha-597780-m03)   <memory unit='MiB'>2200</memory>
	I0814 16:27:59.245958   31878 main.go:141] libmachine: (ha-597780-m03)   <vcpu>2</vcpu>
	I0814 16:27:59.245966   31878 main.go:141] libmachine: (ha-597780-m03)   <features>
	I0814 16:27:59.245994   31878 main.go:141] libmachine: (ha-597780-m03)     <acpi/>
	I0814 16:27:59.246017   31878 main.go:141] libmachine: (ha-597780-m03)     <apic/>
	I0814 16:27:59.246026   31878 main.go:141] libmachine: (ha-597780-m03)     <pae/>
	I0814 16:27:59.246034   31878 main.go:141] libmachine: (ha-597780-m03)     
	I0814 16:27:59.246046   31878 main.go:141] libmachine: (ha-597780-m03)   </features>
	I0814 16:27:59.246061   31878 main.go:141] libmachine: (ha-597780-m03)   <cpu mode='host-passthrough'>
	I0814 16:27:59.246072   31878 main.go:141] libmachine: (ha-597780-m03)   
	I0814 16:27:59.246083   31878 main.go:141] libmachine: (ha-597780-m03)   </cpu>
	I0814 16:27:59.246117   31878 main.go:141] libmachine: (ha-597780-m03)   <os>
	I0814 16:27:59.246141   31878 main.go:141] libmachine: (ha-597780-m03)     <type>hvm</type>
	I0814 16:27:59.246154   31878 main.go:141] libmachine: (ha-597780-m03)     <boot dev='cdrom'/>
	I0814 16:27:59.246170   31878 main.go:141] libmachine: (ha-597780-m03)     <boot dev='hd'/>
	I0814 16:27:59.246179   31878 main.go:141] libmachine: (ha-597780-m03)     <bootmenu enable='no'/>
	I0814 16:27:59.246186   31878 main.go:141] libmachine: (ha-597780-m03)   </os>
	I0814 16:27:59.246191   31878 main.go:141] libmachine: (ha-597780-m03)   <devices>
	I0814 16:27:59.246198   31878 main.go:141] libmachine: (ha-597780-m03)     <disk type='file' device='cdrom'>
	I0814 16:27:59.246207   31878 main.go:141] libmachine: (ha-597780-m03)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/boot2docker.iso'/>
	I0814 16:27:59.246214   31878 main.go:141] libmachine: (ha-597780-m03)       <target dev='hdc' bus='scsi'/>
	I0814 16:27:59.246225   31878 main.go:141] libmachine: (ha-597780-m03)       <readonly/>
	I0814 16:27:59.246238   31878 main.go:141] libmachine: (ha-597780-m03)     </disk>
	I0814 16:27:59.246251   31878 main.go:141] libmachine: (ha-597780-m03)     <disk type='file' device='disk'>
	I0814 16:27:59.246269   31878 main.go:141] libmachine: (ha-597780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 16:27:59.246299   31878 main.go:141] libmachine: (ha-597780-m03)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/ha-597780-m03.rawdisk'/>
	I0814 16:27:59.246318   31878 main.go:141] libmachine: (ha-597780-m03)       <target dev='hda' bus='virtio'/>
	I0814 16:27:59.246331   31878 main.go:141] libmachine: (ha-597780-m03)     </disk>
	I0814 16:27:59.246340   31878 main.go:141] libmachine: (ha-597780-m03)     <interface type='network'>
	I0814 16:27:59.246354   31878 main.go:141] libmachine: (ha-597780-m03)       <source network='mk-ha-597780'/>
	I0814 16:27:59.246366   31878 main.go:141] libmachine: (ha-597780-m03)       <model type='virtio'/>
	I0814 16:27:59.246378   31878 main.go:141] libmachine: (ha-597780-m03)     </interface>
	I0814 16:27:59.246393   31878 main.go:141] libmachine: (ha-597780-m03)     <interface type='network'>
	I0814 16:27:59.246413   31878 main.go:141] libmachine: (ha-597780-m03)       <source network='default'/>
	I0814 16:27:59.246424   31878 main.go:141] libmachine: (ha-597780-m03)       <model type='virtio'/>
	I0814 16:27:59.246434   31878 main.go:141] libmachine: (ha-597780-m03)     </interface>
	I0814 16:27:59.246445   31878 main.go:141] libmachine: (ha-597780-m03)     <serial type='pty'>
	I0814 16:27:59.246457   31878 main.go:141] libmachine: (ha-597780-m03)       <target port='0'/>
	I0814 16:27:59.246471   31878 main.go:141] libmachine: (ha-597780-m03)     </serial>
	I0814 16:27:59.246485   31878 main.go:141] libmachine: (ha-597780-m03)     <console type='pty'>
	I0814 16:27:59.246498   31878 main.go:141] libmachine: (ha-597780-m03)       <target type='serial' port='0'/>
	I0814 16:27:59.246507   31878 main.go:141] libmachine: (ha-597780-m03)     </console>
	I0814 16:27:59.246517   31878 main.go:141] libmachine: (ha-597780-m03)     <rng model='virtio'>
	I0814 16:27:59.246535   31878 main.go:141] libmachine: (ha-597780-m03)       <backend model='random'>/dev/random</backend>
	I0814 16:27:59.246553   31878 main.go:141] libmachine: (ha-597780-m03)     </rng>
	I0814 16:27:59.246568   31878 main.go:141] libmachine: (ha-597780-m03)     
	I0814 16:27:59.246585   31878 main.go:141] libmachine: (ha-597780-m03)     
	I0814 16:27:59.246596   31878 main.go:141] libmachine: (ha-597780-m03)   </devices>
	I0814 16:27:59.246604   31878 main.go:141] libmachine: (ha-597780-m03) </domain>
	I0814 16:27:59.246618   31878 main.go:141] libmachine: (ha-597780-m03) 
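The XML above is handed to libvirt by the kvm2 driver through its Go bindings. Purely as an illustration (not the driver's actual code path), the same definition could be registered and booted with virsh from a small wrapper like this; the XML file path is hypothetical, while the connection URI and domain name come from the log.

// define_domain.go - sketch of defining and starting a libvirt domain via virsh.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and prints its combined output, failing loudly on error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	// Register the domain from the XML file, then boot it.
	run("virsh", "--connect", "qemu:///system", "define", "/tmp/ha-597780-m03.xml")
	run("virsh", "--connect", "qemu:///system", "start", "ha-597780-m03")
}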
	I0814 16:27:59.253221   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:ab:73:8c in network default
	I0814 16:27:59.253785   31878 main.go:141] libmachine: (ha-597780-m03) Ensuring networks are active...
	I0814 16:27:59.253807   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:27:59.254373   31878 main.go:141] libmachine: (ha-597780-m03) Ensuring network default is active
	I0814 16:27:59.254656   31878 main.go:141] libmachine: (ha-597780-m03) Ensuring network mk-ha-597780 is active
	I0814 16:27:59.254932   31878 main.go:141] libmachine: (ha-597780-m03) Getting domain xml...
	I0814 16:27:59.255562   31878 main.go:141] libmachine: (ha-597780-m03) Creating domain...
	I0814 16:28:00.490190   31878 main.go:141] libmachine: (ha-597780-m03) Waiting to get IP...
	I0814 16:28:00.491016   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:00.491434   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:00.491492   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:00.491433   32824 retry.go:31] will retry after 215.668377ms: waiting for machine to come up
	I0814 16:28:00.708783   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:00.709192   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:00.709219   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:00.709143   32824 retry.go:31] will retry after 287.449412ms: waiting for machine to come up
	I0814 16:28:00.998673   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:00.999161   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:00.999183   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:00.999112   32824 retry.go:31] will retry after 410.594458ms: waiting for machine to come up
	I0814 16:28:01.411675   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:01.412228   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:01.412254   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:01.412208   32824 retry.go:31] will retry after 440.346851ms: waiting for machine to come up
	I0814 16:28:01.853631   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:01.854118   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:01.854147   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:01.854057   32824 retry.go:31] will retry after 736.037125ms: waiting for machine to come up
	I0814 16:28:02.591534   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:02.591947   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:02.591971   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:02.591908   32824 retry.go:31] will retry after 760.455251ms: waiting for machine to come up
	I0814 16:28:03.353918   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:03.354326   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:03.354353   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:03.354291   32824 retry.go:31] will retry after 734.384806ms: waiting for machine to come up
	I0814 16:28:04.090570   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:04.091017   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:04.091046   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:04.090964   32824 retry.go:31] will retry after 990.16899ms: waiting for machine to come up
	I0814 16:28:05.083166   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:05.083604   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:05.083628   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:05.083577   32824 retry.go:31] will retry after 1.417341163s: waiting for machine to come up
	I0814 16:28:06.502131   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:06.502609   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:06.502655   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:06.502547   32824 retry.go:31] will retry after 2.204940468s: waiting for machine to come up
	I0814 16:28:08.709498   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:08.710102   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:08.710133   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:08.710046   32824 retry.go:31] will retry after 2.739628932s: waiting for machine to come up
	I0814 16:28:11.452942   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:11.453463   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:11.453492   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:11.453418   32824 retry.go:31] will retry after 2.200619257s: waiting for machine to come up
	I0814 16:28:13.655241   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:13.655869   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:13.655894   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:13.655818   32824 retry.go:31] will retry after 3.238883502s: waiting for machine to come up
	I0814 16:28:16.896282   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:16.896766   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find current IP address of domain ha-597780-m03 in network mk-ha-597780
	I0814 16:28:16.896793   31878 main.go:141] libmachine: (ha-597780-m03) DBG | I0814 16:28:16.896706   32824 retry.go:31] will retry after 3.559583358s: waiting for machine to come up
	I0814 16:28:20.457259   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.457783   31878 main.go:141] libmachine: (ha-597780-m03) Found IP for machine: 192.168.39.167
	I0814 16:28:20.457809   31878 main.go:141] libmachine: (ha-597780-m03) Reserving static IP address...
	I0814 16:28:20.457822   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has current primary IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.458181   31878 main.go:141] libmachine: (ha-597780-m03) DBG | unable to find host DHCP lease matching {name: "ha-597780-m03", mac: "52:54:00:e0:61:b4", ip: "192.168.39.167"} in network mk-ha-597780
	I0814 16:28:20.530929   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Getting to WaitForSSH function...
	I0814 16:28:20.530964   31878 main.go:141] libmachine: (ha-597780-m03) Reserved static IP address: 192.168.39.167
	I0814 16:28:20.530978   31878 main.go:141] libmachine: (ha-597780-m03) Waiting for SSH to be available...
	I0814 16:28:20.533511   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.533911   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:20.533941   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.534112   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Using SSH client type: external
	I0814 16:28:20.534137   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa (-rw-------)
	I0814 16:28:20.534156   31878 main.go:141] libmachine: (ha-597780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 16:28:20.534166   31878 main.go:141] libmachine: (ha-597780-m03) DBG | About to run SSH command:
	I0814 16:28:20.534179   31878 main.go:141] libmachine: (ha-597780-m03) DBG | exit 0
	I0814 16:28:20.663661   31878 main.go:141] libmachine: (ha-597780-m03) DBG | SSH cmd err, output: <nil>: 
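The WaitForSSH step above shells out to the system ssh client with non-interactive options and retries a no-op command until the guest answers. A minimal sketch of that probe, reusing the key path and address from this run; the fixed 3-second sleep and 20-attempt limit stand in for the driver's growing retry intervals.

// ssh_probe.go - sketch of waiting for SSH to become available on a new VM.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "PasswordAuthentication=no",
		"-i", "/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa",
		"docker@192.168.39.167",
		"exit", "0", // no-op command: success means SSH is usable
	}
	for attempt := 1; attempt <= 20; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}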
	I0814 16:28:20.663939   31878 main.go:141] libmachine: (ha-597780-m03) KVM machine creation complete!
	I0814 16:28:20.664255   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetConfigRaw
	I0814 16:28:20.664837   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:20.665037   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:20.665225   31878 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 16:28:20.665238   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:28:20.666554   31878 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 16:28:20.666570   31878 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 16:28:20.666578   31878 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 16:28:20.666586   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:20.668811   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.669189   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:20.669216   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.669346   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:20.669486   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.669631   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.669762   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:20.669905   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:20.670091   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:20.670114   31878 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 16:28:20.778468   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:28:20.778492   31878 main.go:141] libmachine: Detecting the provisioner...
	I0814 16:28:20.778502   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:20.781208   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.781571   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:20.781601   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.781782   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:20.781968   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.782124   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.782244   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:20.782365   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:20.782530   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:20.782540   31878 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 16:28:20.892216   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 16:28:20.892280   31878 main.go:141] libmachine: found compatible host: buildroot
	I0814 16:28:20.892287   31878 main.go:141] libmachine: Provisioning with buildroot...
	I0814 16:28:20.892294   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetMachineName
	I0814 16:28:20.892572   31878 buildroot.go:166] provisioning hostname "ha-597780-m03"
	I0814 16:28:20.892600   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetMachineName
	I0814 16:28:20.892815   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:20.895596   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.896117   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:20.896146   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:20.896273   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:20.896450   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.896615   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:20.896854   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:20.897092   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:20.897267   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:20.897283   31878 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-597780-m03 && echo "ha-597780-m03" | sudo tee /etc/hostname
	I0814 16:28:21.020119   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780-m03
	
	I0814 16:28:21.020147   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.022784   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.023132   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.023152   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.023349   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.023553   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.023733   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.023897   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.024059   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:21.024253   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:21.024277   31878 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-597780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-597780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-597780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:28:21.143314   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:28:21.143359   31878 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:28:21.143375   31878 buildroot.go:174] setting up certificates
	I0814 16:28:21.143389   31878 provision.go:84] configureAuth start
	I0814 16:28:21.143413   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetMachineName
	I0814 16:28:21.143713   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:28:21.146530   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.146932   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.146971   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.147100   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.149060   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.149339   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.149369   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.149498   31878 provision.go:143] copyHostCerts
	I0814 16:28:21.149522   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:28:21.149556   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 16:28:21.149568   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:28:21.149667   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:28:21.149760   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:28:21.149788   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 16:28:21.149799   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:28:21.149836   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:28:21.149897   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:28:21.149921   31878 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 16:28:21.149929   31878 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:28:21.149964   31878 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:28:21.150287   31878 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.ha-597780-m03 san=[127.0.0.1 192.168.39.167 ha-597780-m03 localhost minikube]
	I0814 16:28:21.257447   31878 provision.go:177] copyRemoteCerts
	I0814 16:28:21.257509   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:28:21.257542   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.260087   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.260489   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.260516   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.260686   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.260849   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.261017   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.261147   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:28:21.345036   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 16:28:21.345125   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:28:21.366773   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 16:28:21.366842   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0814 16:28:21.388396   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 16:28:21.388484   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:28:21.409418   31878 provision.go:87] duration metric: took 266.016615ms to configureAuth
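Note: configureAuth ends with the ca/server/server-key PEMs being pushed to /etc/docker on the VM over SSH, using the machine's id_rsa key (the sshutil.go and ssh_runner.go lines above). A rough sketch of that run-a-command-over-SSH pattern with golang.org/x/crypto/ssh is shown below; the host, user, key path and command are placeholders taken from the log, and the helper name runOverSSH is made up for illustration.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.39.167:22", "docker",
		"/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa",
		"sudo mkdir -p /etc/docker")
	fmt.Println(out, err)
}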
	I0814 16:28:21.409449   31878 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:28:21.409684   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:28:21.409765   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.412416   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.412835   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.412861   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.413061   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.413256   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.413408   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.413525   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.413697   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:21.413877   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:21.413892   31878 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:28:21.677901   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:28:21.677938   31878 main.go:141] libmachine: Checking connection to Docker...
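Note: the %!s(MISSING) fragments in the SSH command a few lines above are not corruption of this report. They are Go's fmt package flagging a %s verb that had no matching argument when the command template was routed through a formatting call for logging; the command actually sent to the VM contains a literal printf %s. A two-line demonstration:

package main

import "fmt"

func main() {
	// A format string that contains %s but supplies no argument for it:
	fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s ..."))
	// Prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...
}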
	I0814 16:28:21.677946   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetURL
	I0814 16:28:21.679192   31878 main.go:141] libmachine: (ha-597780-m03) DBG | Using libvirt version 6000000
	I0814 16:28:21.681181   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.681521   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.681543   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.681658   31878 main.go:141] libmachine: Docker is up and running!
	I0814 16:28:21.681672   31878 main.go:141] libmachine: Reticulating splines...
	I0814 16:28:21.681680   31878 client.go:171] duration metric: took 23.040737276s to LocalClient.Create
	I0814 16:28:21.681707   31878 start.go:167] duration metric: took 23.040797467s to libmachine.API.Create "ha-597780"
	I0814 16:28:21.681718   31878 start.go:293] postStartSetup for "ha-597780-m03" (driver="kvm2")
	I0814 16:28:21.681731   31878 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:28:21.681761   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.681979   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:28:21.682003   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.684060   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.684330   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.684354   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.684492   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.684684   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.684817   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.684951   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:28:21.773408   31878 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:28:21.777349   31878 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:28:21.777370   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:28:21.777444   31878 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:28:21.777537   31878 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 16:28:21.777548   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 16:28:21.777653   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 16:28:21.786579   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:28:21.808597   31878 start.go:296] duration metric: took 126.866868ms for postStartSetup
	I0814 16:28:21.808644   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetConfigRaw
	I0814 16:28:21.809206   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:28:21.811918   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.812306   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.812335   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.812655   31878 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:28:21.812852   31878 start.go:128] duration metric: took 23.190411902s to createHost
	I0814 16:28:21.812871   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.815277   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.815654   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.815674   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.815874   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.816060   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.816196   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.816308   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.816442   31878 main.go:141] libmachine: Using SSH client type: native
	I0814 16:28:21.816653   31878 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0814 16:28:21.816667   31878 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:28:21.931715   31878 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723652901.892288502
	
	I0814 16:28:21.931737   31878 fix.go:216] guest clock: 1723652901.892288502
	I0814 16:28:21.931744   31878 fix.go:229] Guest: 2024-08-14 16:28:21.892288502 +0000 UTC Remote: 2024-08-14 16:28:21.812861976 +0000 UTC m=+185.295146227 (delta=79.426526ms)
	I0814 16:28:21.931758   31878 fix.go:200] guest clock delta is within tolerance: 79.426526ms
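Note: the fix.go lines compare the guest clock (read via `date +%s.%N` over SSH) with the host-side timestamp taken when the machine finished creating, and only resynchronize the guest if the delta exceeds a tolerance. A rough Go sketch of that comparison follows; the 2-second tolerance here is illustrative, not minikube's exact threshold.

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log.
	guestRaw := "1723652901.892288502"
	secs, _ := strconv.ParseFloat(guestRaw, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	remote := time.Now() // host-side reference time
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // illustrative threshold
	if delta > tolerance {
		fmt.Println("guest clock delta", delta, "exceeds tolerance, would resync the guest clock")
	} else {
		fmt.Println("guest clock delta", delta, "is within tolerance")
	}
}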
	I0814 16:28:21.931763   31878 start.go:83] releasing machines lock for "ha-597780-m03", held for 23.309428864s
	I0814 16:28:21.931778   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.932009   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:28:21.934743   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.935285   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.935353   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.937363   31878 out.go:177] * Found network options:
	I0814 16:28:21.938795   31878 out.go:177]   - NO_PROXY=192.168.39.4,192.168.39.225
	W0814 16:28:21.939945   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	W0814 16:28:21.939967   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	I0814 16:28:21.939980   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.940538   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.940699   31878 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:28:21.940787   31878 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:28:21.940823   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	W0814 16:28:21.940913   31878 proxy.go:119] fail to check proxy env: Error ip not in block
	W0814 16:28:21.940936   31878 proxy.go:119] fail to check proxy env: Error ip not in block
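Note: "fail to check proxy env: Error ip not in block" means the node IP was tested against the CIDR entries of the NO_PROXY list and was not covered by any of them; it is expected noise when NO_PROXY holds plain peer IPs rather than subnets. A small sketch of that containment check, assuming only CIDR handling for brevity:

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipCoveredByNoProxy reports whether ip falls inside any CIDR entry of a
// NO_PROXY-style list. Plain hostname/IP entries are ignored in this sketch.
func ipCoveredByNoProxy(ip, noProxy string) bool {
	addr := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	// Values taken from the log: the new node IP and the NO_PROXY list.
	fmt.Println(ipCoveredByNoProxy("192.168.39.167", "192.168.39.4,192.168.39.225"))
	// false -> the warning above is logged and provisioning continues.
}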
	I0814 16:28:21.941003   31878 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:28:21.941025   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:28:21.943600   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.943862   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.944046   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.944071   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.944194   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.944314   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:21.944334   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:21.944368   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.944506   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:28:21.944549   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.944706   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:28:21.944713   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:28:21.944871   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:28:21.945030   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:28:22.182608   31878 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:28:22.188514   31878 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:28:22.188591   31878 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:28:22.204201   31878 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 16:28:22.204225   31878 start.go:495] detecting cgroup driver to use...
	I0814 16:28:22.204293   31878 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:28:22.221315   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:28:22.237458   31878 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:28:22.237520   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:28:22.251459   31878 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:28:22.264746   31878 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:28:22.381397   31878 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:28:22.531017   31878 docker.go:233] disabling docker service ...
	I0814 16:28:22.531088   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:28:22.544585   31878 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:28:22.558165   31878 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:28:22.696824   31878 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:28:22.807601   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:28:22.821653   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:28:22.839262   31878 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:28:22.839342   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.850133   31878 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:28:22.850191   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.859788   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.869995   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.879459   31878 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:28:22.889428   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.899777   31878 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.917167   31878 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:28:22.927123   31878 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:28:22.936357   31878 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 16:28:22.936408   31878 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 16:28:22.950536   31878 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
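Note: the sysctl probe above fails with status 255 because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist; minikube then loads the module and enables IPv4 forwarding. A minimal local sketch of the same two checks (the procfs paths are the standard ones; running this requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent of `sudo sysctl net.bridge.bridge-nf-call-iptables`:
	// the file only appears once br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("br_netfilter not loaded, loading it:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe failed:", err, string(out))
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward needs root:", err)
	}
}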
	I0814 16:28:22.959627   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:28:23.072935   31878 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 16:28:23.207339   31878 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:28:23.207426   31878 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:28:23.211816   31878 start.go:563] Will wait 60s for crictl version
	I0814 16:28:23.211878   31878 ssh_runner.go:195] Run: which crictl
	I0814 16:28:23.215943   31878 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:28:23.254626   31878 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:28:23.254707   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:28:23.284346   31878 ssh_runner.go:195] Run: crio --version
	I0814 16:28:23.312383   31878 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:28:23.313724   31878 out.go:177]   - env NO_PROXY=192.168.39.4
	I0814 16:28:23.315140   31878 out.go:177]   - env NO_PROXY=192.168.39.4,192.168.39.225
	I0814 16:28:23.316419   31878 main.go:141] libmachine: (ha-597780-m03) Calling .GetIP
	I0814 16:28:23.319204   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:23.319704   31878 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:28:23.319731   31878 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:28:23.319984   31878 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:28:23.323956   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:28:23.336792   31878 mustload.go:65] Loading cluster: ha-597780
	I0814 16:28:23.337035   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:28:23.337414   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:28:23.337458   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:28:23.352506   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
	I0814 16:28:23.353465   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:28:23.353923   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:28:23.353941   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:28:23.354257   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:28:23.354437   31878 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:28:23.356036   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:28:23.356313   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:28:23.356344   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:28:23.370230   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32981
	I0814 16:28:23.370708   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:28:23.371061   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:28:23.371081   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:28:23.371375   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:28:23.371534   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:28:23.371698   31878 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780 for IP: 192.168.39.167
	I0814 16:28:23.371709   31878 certs.go:194] generating shared ca certs ...
	I0814 16:28:23.371721   31878 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:28:23.371843   31878 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:28:23.371899   31878 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:28:23.371909   31878 certs.go:256] generating profile certs ...
	I0814 16:28:23.371980   31878 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key
	I0814 16:28:23.372005   31878 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.c004033e
	I0814 16:28:23.372018   31878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.c004033e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.225 192.168.39.167 192.168.39.254]
	I0814 16:28:23.531346   31878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.c004033e ...
	I0814 16:28:23.531375   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.c004033e: {Name:mkf610138317689d6471fb37acfe2a421465e4a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:28:23.531526   31878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.c004033e ...
	I0814 16:28:23.531538   31878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.c004033e: {Name:mka58bc6a325725646d19898fe4916d2053e8c88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:28:23.531604   31878 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.c004033e -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt
	I0814 16:28:23.531741   31878 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.c004033e -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key
	I0814 16:28:23.531858   31878 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key
	I0814 16:28:23.531872   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 16:28:23.531884   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 16:28:23.531898   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 16:28:23.531912   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 16:28:23.531924   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 16:28:23.531936   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 16:28:23.531947   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 16:28:23.531960   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 16:28:23.532007   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 16:28:23.532033   31878 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 16:28:23.532041   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:28:23.532062   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:28:23.532082   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:28:23.532101   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:28:23.532136   31878 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:28:23.532160   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:28:23.532173   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 16:28:23.532185   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 16:28:23.532215   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:28:23.534797   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:28:23.535214   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:28:23.535240   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:28:23.535459   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:28:23.535634   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:28:23.535781   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:28:23.535876   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:28:23.607759   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0814 16:28:23.613426   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0814 16:28:23.624323   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0814 16:28:23.627973   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0814 16:28:23.637728   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0814 16:28:23.642181   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0814 16:28:23.652002   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0814 16:28:23.655744   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0814 16:28:23.665853   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0814 16:28:23.669519   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0814 16:28:23.679142   31878 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0814 16:28:23.682748   31878 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0814 16:28:23.692077   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:28:23.715909   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:28:23.738090   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:28:23.762049   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:28:23.784577   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0814 16:28:23.806319   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:28:23.829985   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:28:23.853752   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:28:23.876043   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:28:23.899547   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 16:28:23.922868   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 16:28:23.946022   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0814 16:28:23.960834   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0814 16:28:23.976274   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0814 16:28:23.991130   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0814 16:28:24.006609   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0814 16:28:24.021348   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0814 16:28:24.036293   31878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0814 16:28:24.051655   31878 ssh_runner.go:195] Run: openssl version
	I0814 16:28:24.056975   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 16:28:24.067045   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 16:28:24.071023   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 16:28:24.071070   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 16:28:24.076626   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 16:28:24.086718   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:28:24.096876   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:28:24.102009   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:28:24.102074   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:28:24.107679   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:28:24.118007   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 16:28:24.128176   31878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 16:28:24.132159   31878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 16:28:24.132226   31878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 16:28:24.137618   31878 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
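Note: the three `ln -fs` runs above build the hashed symlinks OpenSSL expects in /etc/ssl/certs: each certificate is linked as <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints. Since that subject-hash algorithm is OpenSSL-specific, the simplest sketch just shells out to the openssl binary; the helper name subjectHash is made up for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the value printed by `openssl x509 -hash -noout -in path`,
// which is what the /etc/ssl/certs/<hash>.0 symlink is named after.
func subjectHash(path string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}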
	I0814 16:28:24.148077   31878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:28:24.151750   31878 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 16:28:24.151810   31878 kubeadm.go:934] updating node {m03 192.168.39.167 8443 v1.31.0 crio true true} ...
	I0814 16:28:24.151902   31878 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-597780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
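Note: the kubelet drop-in shown above is rendered with the node-specific values (hostname override and node IP) filled in before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal text/template sketch of that rendering; the template variable names here are illustrative, not minikube's actual template fields.

package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.0",
		"NodeName":          "ha-597780-m03",
		"NodeIP":            "192.168.39.167",
	})
}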
	I0814 16:28:24.151936   31878 kube-vip.go:115] generating kube-vip config ...
	I0814 16:28:24.151978   31878 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 16:28:24.168492   31878 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 16:28:24.168553   31878 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0814 16:28:24.168622   31878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:28:24.177752   31878 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0814 16:28:24.177817   31878 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0814 16:28:24.186720   31878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0814 16:28:24.186743   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0814 16:28:24.186752   31878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0814 16:28:24.186771   31878 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0814 16:28:24.186789   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0814 16:28:24.186800   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:28:24.186819   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0814 16:28:24.186849   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0814 16:28:24.203631   31878 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0814 16:28:24.203708   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0814 16:28:24.203725   31878 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0814 16:28:24.203732   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0814 16:28:24.203780   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0814 16:28:24.203810   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0814 16:28:24.212736   31878 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0814 16:28:24.212771   31878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
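Note: the kubectl/kubeadm/kubelet binaries are fetched from dl.k8s.io with a `checksum=file:...sha256` fragment, i.e. each download is verified against its published SHA-256 before being pushed to the node. A small sketch of that verify-after-download step, using the kubectl URL from the log:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Println(err)
		return
	}
	sumRaw, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Println(err)
		return
	}
	want := strings.TrimSpace(string(sumRaw)) // the .sha256 file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Println("checksum mismatch, refusing to install kubectl")
		return
	}
	fmt.Println("kubectl verified:", len(bin), "bytes")
}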
	I0814 16:28:25.014583   31878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0814 16:28:25.025211   31878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 16:28:25.042467   31878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:28:25.059345   31878 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0814 16:28:25.074711   31878 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 16:28:25.078397   31878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 16:28:25.090138   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:28:25.212030   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:28:25.232302   31878 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:28:25.232784   31878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:28:25.232837   31878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:28:25.250540   31878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46765
	I0814 16:28:25.251572   31878 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:28:25.252132   31878 main.go:141] libmachine: Using API Version  1
	I0814 16:28:25.252153   31878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:28:25.252499   31878 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:28:25.252708   31878 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:28:25.252852   31878 start.go:317] joinCluster: &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:28:25.253023   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0814 16:28:25.253044   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:28:25.256193   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:28:25.256616   31878 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:28:25.256642   31878 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:28:25.256850   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:28:25.257048   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:28:25.257195   31878 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:28:25.257339   31878 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:28:25.405413   31878 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:28:25.405462   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qz6at2.4om312wgxwib85w4 --discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-597780-m03 --control-plane --apiserver-advertise-address=192.168.39.167 --apiserver-bind-port=8443"
	I0814 16:28:48.475052   31878 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qz6at2.4om312wgxwib85w4 --discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-597780-m03 --control-plane --apiserver-advertise-address=192.168.39.167 --apiserver-bind-port=8443": (23.069546291s)
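Note: the `--discovery-token-ca-cert-hash sha256:...` value in the join command above is kubeadm's standard CA cert hash: the SHA-256 of the cluster CA certificate's Subject Public Key Info. A sketch of recomputing it from ca.crt (the path is the on-node one used by minikube; adjust for a local copy):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found in ca.crt")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Hash over the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}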
	I0814 16:28:48.475092   31878 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0814 16:28:49.048015   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-597780-m03 minikube.k8s.io/updated_at=2024_08_14T16_28_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=ha-597780 minikube.k8s.io/primary=false
	I0814 16:28:49.172851   31878 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-597780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0814 16:28:49.287202   31878 start.go:319] duration metric: took 24.034345482s to joinCluster
	I0814 16:28:49.287280   31878 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 16:28:49.287645   31878 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:28:49.288915   31878 out.go:177] * Verifying Kubernetes components...
	I0814 16:28:49.290054   31878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:28:49.507735   31878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:28:49.566643   31878 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:28:49.566988   31878 kapi.go:59] client config for ha-597780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key", CAFile:"/home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f170c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0814 16:28:49.567088   31878 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.4:8443
	I0814 16:28:49.567378   31878 node_ready.go:35] waiting up to 6m0s for node "ha-597780-m03" to be "Ready" ...
	I0814 16:28:49.567483   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:49.567496   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:49.567508   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:49.567514   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:49.570928   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:50.068602   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:50.068628   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:50.068641   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:50.068679   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:50.072059   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:50.568564   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:50.568592   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:50.568601   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:50.568606   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:50.572723   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:28:51.067584   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:51.067611   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:51.067631   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:51.067638   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:51.071023   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:51.568228   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:51.568250   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:51.568261   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:51.568266   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:51.571548   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:51.572032   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:28:52.067883   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:52.067905   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:52.067915   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:52.067920   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:52.070772   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:28:52.568598   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:52.568626   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:52.568637   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:52.568644   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:52.572038   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:53.067960   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:53.067986   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:53.067996   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:53.068001   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:53.071266   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:53.568203   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:53.568225   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:53.568233   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:53.568239   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:53.571473   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:53.572184   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:28:54.068455   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:54.068475   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:54.068488   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:54.068491   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:54.071339   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:28:54.567828   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:54.567856   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:54.567866   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:54.567874   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:54.574563   31878 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0814 16:28:55.068622   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:55.068647   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:55.068658   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:55.068663   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:55.071914   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:55.568556   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:55.568576   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:55.568582   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:55.568587   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:55.571804   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:55.572292   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:28:56.068546   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:56.068568   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:56.068578   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:56.068583   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:56.072131   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:56.568251   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:56.568281   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:56.568291   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:56.568298   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:56.571731   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:57.068349   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:57.068372   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:57.068379   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:57.068385   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:57.070869   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:28:57.568564   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:57.568584   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:57.568592   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:57.568598   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:57.571770   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:57.572608   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:28:58.068098   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:58.068123   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:58.068131   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:58.068136   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:58.071240   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:58.568369   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:58.568397   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:58.568407   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:58.568412   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:58.571837   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:28:59.068558   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:59.068593   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:59.068602   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:59.068608   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:59.071513   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:28:59.568430   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:28:59.568470   31878 round_trippers.go:469] Request Headers:
	I0814 16:28:59.568477   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:28:59.568481   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:28:59.571759   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:00.068605   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:00.068627   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:00.068639   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:00.068647   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:00.072033   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:00.072740   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:29:00.568425   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:00.568450   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:00.568458   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:00.568465   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:00.572298   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:01.068204   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:01.068238   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:01.068250   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:01.068258   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:01.071244   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:01.568161   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:01.568188   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:01.568199   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:01.568205   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:01.571637   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:02.068242   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:02.068267   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:02.068276   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:02.068282   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:02.071343   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:02.568379   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:02.568403   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:02.568412   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:02.568417   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:02.571355   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:02.571886   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:29:03.067646   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:03.067667   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:03.067674   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:03.067679   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:03.070708   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:03.568581   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:03.568608   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:03.568619   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:03.568626   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:03.571505   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:04.067593   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:04.067634   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:04.067652   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:04.067656   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:04.070803   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:04.568233   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:04.568268   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:04.568278   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:04.568306   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:04.573076   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:29:04.573726   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:29:05.068223   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:05.068249   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:05.068263   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:05.068271   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:05.070929   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:05.568472   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:05.568504   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:05.568517   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:05.568524   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:05.571869   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:06.068545   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:06.068566   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:06.068574   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:06.068577   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:06.071859   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:06.568006   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:06.568033   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:06.568045   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:06.568051   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:06.571453   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.067849   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:07.067924   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.067950   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.067964   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.071472   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.072434   31878 node_ready.go:53] node "ha-597780-m03" has status "Ready":"False"
	I0814 16:29:07.568575   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:07.568599   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.568608   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.568614   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.571596   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.572338   31878 node_ready.go:49] node "ha-597780-m03" has status "Ready":"True"
	I0814 16:29:07.572356   31878 node_ready.go:38] duration metric: took 18.004962293s for node "ha-597780-m03" to be "Ready" ...
	I0814 16:29:07.572364   31878 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:29:07.572424   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:07.572433   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.572440   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.572444   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.577495   31878 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0814 16:29:07.585157   31878 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.585268   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-28k2m
	I0814 16:29:07.585286   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.585296   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.585303   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.588480   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.589251   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:07.589270   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.589281   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.589288   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.592447   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.593017   31878 pod_ready.go:92] pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.593040   31878 pod_ready.go:81] duration metric: took 7.850765ms for pod "coredns-6f6b679f8f-28k2m" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.593053   31878 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.593142   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-kc84b
	I0814 16:29:07.593152   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.593162   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.593168   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.596200   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.596895   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:07.596909   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.596916   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.596921   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.599174   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.599650   31878 pod_ready.go:92] pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.599670   31878 pod_ready.go:81] duration metric: took 6.609573ms for pod "coredns-6f6b679f8f-kc84b" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.599682   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.599747   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780
	I0814 16:29:07.599757   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.599767   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.599774   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.602031   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.602550   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:07.602566   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.602576   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.602582   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.605537   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.606107   31878 pod_ready.go:92] pod "etcd-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.606124   31878 pod_ready.go:81] duration metric: took 6.434528ms for pod "etcd-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.606132   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.606177   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780-m02
	I0814 16:29:07.606184   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.606191   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.606197   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.608992   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.609493   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:07.609506   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.609513   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.609517   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.612196   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.612719   31878 pod_ready.go:92] pod "etcd-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.612739   31878 pod_ready.go:81] duration metric: took 6.600607ms for pod "etcd-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.612751   31878 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.769170   31878 request.go:632] Waited for 156.349582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780-m03
	I0814 16:29:07.769255   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/etcd-ha-597780-m03
	I0814 16:29:07.769265   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.769276   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.769286   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.772462   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:07.969360   31878 request.go:632] Waited for 196.218172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:07.969411   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:07.969416   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:07.969423   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:07.969428   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:07.972339   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:07.972901   31878 pod_ready.go:92] pod "etcd-ha-597780-m03" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:07.972922   31878 pod_ready.go:81] duration metric: took 360.158993ms for pod "etcd-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:07.972943   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.169015   31878 request.go:632] Waited for 196.006672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780
	I0814 16:29:08.169109   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780
	I0814 16:29:08.169117   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.169128   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.169138   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.172166   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:08.369122   31878 request.go:632] Waited for 196.24583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:08.369190   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:08.369197   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.369207   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.369213   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.372255   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:08.372960   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:08.372977   31878 pod_ready.go:81] duration metric: took 400.026545ms for pod "kube-apiserver-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.372986   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.569453   31878 request.go:632] Waited for 196.397043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m02
	I0814 16:29:08.569511   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m02
	I0814 16:29:08.569516   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.569524   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.569528   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.572332   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:08.768650   31878 request.go:632] Waited for 195.20197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:08.768709   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:08.768716   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.768727   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.768737   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.771774   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:08.772261   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:08.772278   31878 pod_ready.go:81] duration metric: took 399.284844ms for pod "kube-apiserver-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.772288   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:08.968742   31878 request.go:632] Waited for 196.381006ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m03
	I0814 16:29:08.968841   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-597780-m03
	I0814 16:29:08.968852   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:08.968864   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:08.968875   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:08.972046   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:09.169224   31878 request.go:632] Waited for 196.392353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:09.169290   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:09.169297   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.169307   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.169344   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.172100   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:09.172767   31878 pod_ready.go:92] pod "kube-apiserver-ha-597780-m03" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:09.172785   31878 pod_ready.go:81] duration metric: took 400.49136ms for pod "kube-apiserver-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.172797   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.368840   31878 request.go:632] Waited for 195.910201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780
	I0814 16:29:09.368910   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780
	I0814 16:29:09.368918   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.368928   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.368939   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.372517   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:09.568926   31878 request.go:632] Waited for 195.394269ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:09.569000   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:09.569009   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.569018   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.569024   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.572245   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:09.572668   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:09.572685   31878 pod_ready.go:81] duration metric: took 399.881647ms for pod "kube-controller-manager-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.572694   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.768898   31878 request.go:632] Waited for 196.11828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m02
	I0814 16:29:09.768960   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m02
	I0814 16:29:09.768968   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.768978   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.768988   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.773594   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:29:09.968689   31878 request.go:632] Waited for 194.254671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:09.968758   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:09.968774   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:09.968785   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:09.968793   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:09.971724   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:09.972375   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:09.972394   31878 pod_ready.go:81] duration metric: took 399.693107ms for pod "kube-controller-manager-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:09.972404   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.169550   31878 request.go:632] Waited for 197.077109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m03
	I0814 16:29:10.169646   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-597780-m03
	I0814 16:29:10.169657   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.169669   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.169677   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.172716   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:10.368844   31878 request.go:632] Waited for 195.315402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:10.368949   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:10.368963   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.368972   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.368977   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.372288   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:10.372837   31878 pod_ready.go:92] pod "kube-controller-manager-ha-597780-m03" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:10.372858   31878 pod_ready.go:81] duration metric: took 400.448188ms for pod "kube-controller-manager-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.372870   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4q2dq" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.569017   31878 request.go:632] Waited for 196.052872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q2dq
	I0814 16:29:10.569075   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4q2dq
	I0814 16:29:10.569081   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.569090   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.569099   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.572129   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:10.769223   31878 request.go:632] Waited for 196.399201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:10.769288   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:10.769296   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.769306   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.769311   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.772503   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:10.773054   31878 pod_ready.go:92] pod "kube-proxy-4q2dq" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:10.773075   31878 pod_ready.go:81] duration metric: took 400.188151ms for pod "kube-proxy-4q2dq" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.773088   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-79txl" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:10.969067   31878 request.go:632] Waited for 195.902033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-79txl
	I0814 16:29:10.969119   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-79txl
	I0814 16:29:10.969124   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:10.969131   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:10.969136   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:10.972148   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:11.169215   31878 request.go:632] Waited for 196.37647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:11.169306   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:11.169317   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.169328   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.169338   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.172144   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:11.172632   31878 pod_ready.go:92] pod "kube-proxy-79txl" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:11.172650   31878 pod_ready.go:81] duration metric: took 399.555003ms for pod "kube-proxy-79txl" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.172662   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97tjj" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.369808   31878 request.go:632] Waited for 196.984895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97tjj
	I0814 16:29:11.369925   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-proxy-97tjj
	I0814 16:29:11.369932   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.369939   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.369947   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.373101   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:11.568936   31878 request.go:632] Waited for 195.065778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:11.569027   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:11.569042   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.569052   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.569058   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.571899   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:11.572502   31878 pod_ready.go:92] pod "kube-proxy-97tjj" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:11.572526   31878 pod_ready.go:81] duration metric: took 399.85308ms for pod "kube-proxy-97tjj" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.572540   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.769425   31878 request.go:632] Waited for 196.784299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780
	I0814 16:29:11.769485   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780
	I0814 16:29:11.769493   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.769502   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.769512   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.772657   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:11.969505   31878 request.go:632] Waited for 196.222574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:11.969586   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780
	I0814 16:29:11.969597   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:11.969607   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:11.969628   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:11.972738   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:11.973379   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:11.973403   31878 pod_ready.go:81] duration metric: took 400.847019ms for pod "kube-scheduler-ha-597780" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:11.973413   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:12.169525   31878 request.go:632] Waited for 196.045447ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m02
	I0814 16:29:12.169619   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m02
	I0814 16:29:12.169630   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.169640   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.169648   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.172903   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:12.368792   31878 request.go:632] Waited for 195.312013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:12.368860   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m02
	I0814 16:29:12.368867   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.368877   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.368882   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.371851   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:12.372325   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:12.372343   31878 pod_ready.go:81] duration metric: took 398.923788ms for pod "kube-scheduler-ha-597780-m02" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:12.372352   31878 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:12.569518   31878 request.go:632] Waited for 197.106752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m03
	I0814 16:29:12.569587   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-597780-m03
	I0814 16:29:12.569593   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.569601   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.569605   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.572556   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:12.768667   31878 request.go:632] Waited for 195.348501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:12.768748   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes/ha-597780-m03
	I0814 16:29:12.768763   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.768791   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.768797   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.771628   31878 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0814 16:29:12.772128   31878 pod_ready.go:92] pod "kube-scheduler-ha-597780-m03" in "kube-system" namespace has status "Ready":"True"
	I0814 16:29:12.772146   31878 pod_ready.go:81] duration metric: took 399.787744ms for pod "kube-scheduler-ha-597780-m03" in "kube-system" namespace to be "Ready" ...
	I0814 16:29:12.772156   31878 pod_ready.go:38] duration metric: took 5.199783055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 16:29:12.772190   31878 api_server.go:52] waiting for apiserver process to appear ...
	I0814 16:29:12.772270   31878 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:29:12.789000   31878 api_server.go:72] duration metric: took 23.501684528s to wait for apiserver process to appear ...
	I0814 16:29:12.789024   31878 api_server.go:88] waiting for apiserver healthz status ...
	I0814 16:29:12.789045   31878 api_server.go:253] Checking apiserver healthz at https://192.168.39.4:8443/healthz ...
	I0814 16:29:12.793311   31878 api_server.go:279] https://192.168.39.4:8443/healthz returned 200:
	ok
	I0814 16:29:12.793380   31878 round_trippers.go:463] GET https://192.168.39.4:8443/version
	I0814 16:29:12.793386   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.793393   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.793399   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.794223   31878 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0814 16:29:12.794281   31878 api_server.go:141] control plane version: v1.31.0
	I0814 16:29:12.794293   31878 api_server.go:131] duration metric: took 5.262979ms to wait for apiserver health ...
	I0814 16:29:12.794303   31878 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 16:29:12.969628   31878 request.go:632] Waited for 175.246778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:12.969724   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:12.969735   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:12.969742   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:12.969746   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:12.975221   31878 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0814 16:29:12.981537   31878 system_pods.go:59] 24 kube-system pods found
	I0814 16:29:12.981574   31878 system_pods.go:61] "coredns-6f6b679f8f-28k2m" [ec3725c1-3e21-49b0-9caf-922ef1928ed8] Running
	I0814 16:29:12.981582   31878 system_pods.go:61] "coredns-6f6b679f8f-kc84b" [3a483f17-cab5-4090-abc6-808d84397a8a] Running
	I0814 16:29:12.981587   31878 system_pods.go:61] "etcd-ha-597780" [9af2f660-01fe-499f-902e-4988a5527c5a] Running
	I0814 16:29:12.981596   31878 system_pods.go:61] "etcd-ha-597780-m02" [c811879c-cf46-4c5b-aec2-6fa9aae64d13] Running
	I0814 16:29:12.981600   31878 system_pods.go:61] "etcd-ha-597780-m03" [7970e939-1b0d-4a5c-9d60-8cee7ac3cd63] Running
	I0814 16:29:12.981605   31878 system_pods.go:61] "kindnet-2p7zj" [c62a2c70-6ef9-44cb-9a04-9a519f8be934] Running
	I0814 16:29:12.981611   31878 system_pods.go:61] "kindnet-c8f8r" [b053dfba-820a-416f-9233-ececd7159e1e] Running
	I0814 16:29:12.981616   31878 system_pods.go:61] "kindnet-zm75h" [1e5eabaf-5973-4658-b12b-f7faf67b8af7] Running
	I0814 16:29:12.981621   31878 system_pods.go:61] "kube-apiserver-ha-597780" [8efb614b-9a4f-4029-aba3-e2183fb20627] Running
	I0814 16:29:12.981626   31878 system_pods.go:61] "kube-apiserver-ha-597780-m02" [26d7d4c8-6f40-4217-bf24-f9f94c9f8a79] Running
	I0814 16:29:12.981633   31878 system_pods.go:61] "kube-apiserver-ha-597780-m03" [dcfc0768-d66a-41fe-9dd5-44a7bd3de490] Running
	I0814 16:29:12.981642   31878 system_pods.go:61] "kube-controller-manager-ha-597780" [ad59b322-ee34-4041-af68-8b5ffcdff9dd] Running
	I0814 16:29:12.981648   31878 system_pods.go:61] "kube-controller-manager-ha-597780-m02" [a25ce1a0-cedb-40cd-ade3-ba63a4b69cd4] Running
	I0814 16:29:12.981656   31878 system_pods.go:61] "kube-controller-manager-ha-597780-m03" [79f9e4bd-bd33-424a-be78-d5175c11592e] Running
	I0814 16:29:12.981662   31878 system_pods.go:61] "kube-proxy-4q2dq" [9e95547c-001c-4942-b160-33e37a389820] Running
	I0814 16:29:12.981667   31878 system_pods.go:61] "kube-proxy-79txl" [ea48ab09-60d5-4133-accc-f3fd69a50c5d] Running
	I0814 16:29:12.981673   31878 system_pods.go:61] "kube-proxy-97tjj" [8de24848-3fe3-4be5-b78f-169457f28da3] Running
	I0814 16:29:12.981678   31878 system_pods.go:61] "kube-scheduler-ha-597780" [c1576ee1-5aed-4177-b37e-76786ceee1a1] Running
	I0814 16:29:12.981684   31878 system_pods.go:61] "kube-scheduler-ha-597780-m02" [cb250902-8200-423a-8bd3-463aebd7379c] Running
	I0814 16:29:12.981691   31878 system_pods.go:61] "kube-scheduler-ha-597780-m03" [42853b7f-be1d-4252-b062-3ef76e17b1c4] Running
	I0814 16:29:12.981697   31878 system_pods.go:61] "kube-vip-ha-597780" [a5738727-b1a0-4750-9e02-784278225ee4] Running
	I0814 16:29:12.981702   31878 system_pods.go:61] "kube-vip-ha-597780-m02" [c2f92dd8-8248-44a7-bc10-a91546e50eb9] Running
	I0814 16:29:12.981708   31878 system_pods.go:61] "kube-vip-ha-597780-m03" [37835783-8797-41c9-8141-3b54f9bf0642] Running
	I0814 16:29:12.981715   31878 system_pods.go:61] "storage-provisioner" [9939439d-cddd-4505-b554-b72f749269fd] Running
	I0814 16:29:12.981724   31878 system_pods.go:74] duration metric: took 187.414897ms to wait for pod list to return data ...
	I0814 16:29:12.981739   31878 default_sa.go:34] waiting for default service account to be created ...
	I0814 16:29:13.169121   31878 request.go:632] Waited for 187.288377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0814 16:29:13.169184   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/default/serviceaccounts
	I0814 16:29:13.169189   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:13.169196   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:13.169200   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:13.172851   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:13.172967   31878 default_sa.go:45] found service account: "default"
	I0814 16:29:13.172983   31878 default_sa.go:55] duration metric: took 191.237857ms for default service account to be created ...
	I0814 16:29:13.172991   31878 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 16:29:13.369473   31878 request.go:632] Waited for 196.411676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:13.369524   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/namespaces/kube-system/pods
	I0814 16:29:13.369529   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:13.369537   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:13.369544   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:13.374488   31878 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0814 16:29:13.382315   31878 system_pods.go:86] 24 kube-system pods found
	I0814 16:29:13.382348   31878 system_pods.go:89] "coredns-6f6b679f8f-28k2m" [ec3725c1-3e21-49b0-9caf-922ef1928ed8] Running
	I0814 16:29:13.382354   31878 system_pods.go:89] "coredns-6f6b679f8f-kc84b" [3a483f17-cab5-4090-abc6-808d84397a8a] Running
	I0814 16:29:13.382358   31878 system_pods.go:89] "etcd-ha-597780" [9af2f660-01fe-499f-902e-4988a5527c5a] Running
	I0814 16:29:13.382363   31878 system_pods.go:89] "etcd-ha-597780-m02" [c811879c-cf46-4c5b-aec2-6fa9aae64d13] Running
	I0814 16:29:13.382367   31878 system_pods.go:89] "etcd-ha-597780-m03" [7970e939-1b0d-4a5c-9d60-8cee7ac3cd63] Running
	I0814 16:29:13.382371   31878 system_pods.go:89] "kindnet-2p7zj" [c62a2c70-6ef9-44cb-9a04-9a519f8be934] Running
	I0814 16:29:13.382376   31878 system_pods.go:89] "kindnet-c8f8r" [b053dfba-820a-416f-9233-ececd7159e1e] Running
	I0814 16:29:13.382380   31878 system_pods.go:89] "kindnet-zm75h" [1e5eabaf-5973-4658-b12b-f7faf67b8af7] Running
	I0814 16:29:13.382384   31878 system_pods.go:89] "kube-apiserver-ha-597780" [8efb614b-9a4f-4029-aba3-e2183fb20627] Running
	I0814 16:29:13.382388   31878 system_pods.go:89] "kube-apiserver-ha-597780-m02" [26d7d4c8-6f40-4217-bf24-f9f94c9f8a79] Running
	I0814 16:29:13.382393   31878 system_pods.go:89] "kube-apiserver-ha-597780-m03" [dcfc0768-d66a-41fe-9dd5-44a7bd3de490] Running
	I0814 16:29:13.382400   31878 system_pods.go:89] "kube-controller-manager-ha-597780" [ad59b322-ee34-4041-af68-8b5ffcdff9dd] Running
	I0814 16:29:13.382405   31878 system_pods.go:89] "kube-controller-manager-ha-597780-m02" [a25ce1a0-cedb-40cd-ade3-ba63a4b69cd4] Running
	I0814 16:29:13.382410   31878 system_pods.go:89] "kube-controller-manager-ha-597780-m03" [79f9e4bd-bd33-424a-be78-d5175c11592e] Running
	I0814 16:29:13.382414   31878 system_pods.go:89] "kube-proxy-4q2dq" [9e95547c-001c-4942-b160-33e37a389820] Running
	I0814 16:29:13.382419   31878 system_pods.go:89] "kube-proxy-79txl" [ea48ab09-60d5-4133-accc-f3fd69a50c5d] Running
	I0814 16:29:13.382423   31878 system_pods.go:89] "kube-proxy-97tjj" [8de24848-3fe3-4be5-b78f-169457f28da3] Running
	I0814 16:29:13.382429   31878 system_pods.go:89] "kube-scheduler-ha-597780" [c1576ee1-5aed-4177-b37e-76786ceee1a1] Running
	I0814 16:29:13.382432   31878 system_pods.go:89] "kube-scheduler-ha-597780-m02" [cb250902-8200-423a-8bd3-463aebd7379c] Running
	I0814 16:29:13.382439   31878 system_pods.go:89] "kube-scheduler-ha-597780-m03" [42853b7f-be1d-4252-b062-3ef76e17b1c4] Running
	I0814 16:29:13.382443   31878 system_pods.go:89] "kube-vip-ha-597780" [a5738727-b1a0-4750-9e02-784278225ee4] Running
	I0814 16:29:13.382449   31878 system_pods.go:89] "kube-vip-ha-597780-m02" [c2f92dd8-8248-44a7-bc10-a91546e50eb9] Running
	I0814 16:29:13.382453   31878 system_pods.go:89] "kube-vip-ha-597780-m03" [37835783-8797-41c9-8141-3b54f9bf0642] Running
	I0814 16:29:13.382458   31878 system_pods.go:89] "storage-provisioner" [9939439d-cddd-4505-b554-b72f749269fd] Running
	I0814 16:29:13.382464   31878 system_pods.go:126] duration metric: took 209.465171ms to wait for k8s-apps to be running ...
	I0814 16:29:13.382474   31878 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 16:29:13.382540   31878 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:29:13.397010   31878 system_svc.go:56] duration metric: took 14.527615ms WaitForService to wait for kubelet
	I0814 16:29:13.397039   31878 kubeadm.go:582] duration metric: took 24.10972781s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:29:13.397076   31878 node_conditions.go:102] verifying NodePressure condition ...
	I0814 16:29:13.569479   31878 request.go:632] Waited for 172.328639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.4:8443/api/v1/nodes
	I0814 16:29:13.569543   31878 round_trippers.go:463] GET https://192.168.39.4:8443/api/v1/nodes
	I0814 16:29:13.569548   31878 round_trippers.go:469] Request Headers:
	I0814 16:29:13.569555   31878 round_trippers.go:473]     Accept: application/json, */*
	I0814 16:29:13.569561   31878 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0814 16:29:13.572794   31878 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0814 16:29:13.574144   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:29:13.574170   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:29:13.574196   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:29:13.574201   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:29:13.574206   31878 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 16:29:13.574214   31878 node_conditions.go:123] node cpu capacity is 2
	I0814 16:29:13.574220   31878 node_conditions.go:105] duration metric: took 177.13766ms to run NodePressure ...
	I0814 16:29:13.574238   31878 start.go:241] waiting for startup goroutines ...
	I0814 16:29:13.574262   31878 start.go:255] writing updated cluster config ...
	I0814 16:29:13.574639   31878 ssh_runner.go:195] Run: rm -f paused
	I0814 16:29:13.625302   31878 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 16:29:13.626803   31878 out.go:177] * Done! kubectl is now configured to use "ha-597780" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.214401859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653236214375983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46645c3b-2d92-4bf6-b0fe-7aa8de7f6991 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.215200934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8017cb24-99ab-4bb1-bfa4-fca10a6ce0e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.215323024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8017cb24-99ab-4bb1-bfa4-fca10a6ce0e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.216395322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723652958530026773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778223379570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778200551048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee,PodSandboxId:4c5c92213f0e6251be7e29adcda3cded019246457065d5c0b303c9d621a74ab5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723652778118596170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723652766365172600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172365276
2447290664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770,PodSandboxId:81fcaf0428bd7b15c5487925be0aaccb835f08d18cf3b4649f532fdc79b8e9e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172365275328
8661962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 498bfc5ba79cf3931c7cca41edd994ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723652750804081450,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723652750785186368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd,PodSandboxId:44348a00d6f65407f29b608c7166f2039a3b9bc56b2a09eb9ba311632aa6d825,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723652750790958720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea,PodSandboxId:004f1d9c571dd53906206c8edf18cc3624d52580711e76f40e3a2430cee0abf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723652750648705145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8017cb24-99ab-4bb1-bfa4-fca10a6ce0e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.261896536Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea797599-6f08-4203-9e43-32c74188ac31 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.261982822Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea797599-6f08-4203-9e43-32c74188ac31 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.262977234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93ef5239-9127-4c3b-a397-017504e932bc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.263531323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653236263503630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93ef5239-9127-4c3b-a397-017504e932bc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.264108361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37eaf427-96c7-40aa-a676-75d5a2aca425 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.264168444Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37eaf427-96c7-40aa-a676-75d5a2aca425 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.264461537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723652958530026773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778223379570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778200551048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee,PodSandboxId:4c5c92213f0e6251be7e29adcda3cded019246457065d5c0b303c9d621a74ab5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723652778118596170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723652766365172600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172365276
2447290664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770,PodSandboxId:81fcaf0428bd7b15c5487925be0aaccb835f08d18cf3b4649f532fdc79b8e9e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172365275328
8661962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 498bfc5ba79cf3931c7cca41edd994ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723652750804081450,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723652750785186368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd,PodSandboxId:44348a00d6f65407f29b608c7166f2039a3b9bc56b2a09eb9ba311632aa6d825,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723652750790958720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea,PodSandboxId:004f1d9c571dd53906206c8edf18cc3624d52580711e76f40e3a2430cee0abf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723652750648705145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37eaf427-96c7-40aa-a676-75d5a2aca425 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.299654524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=91329c72-7040-4b82-a55c-ca2d3fc91b6d name=/runtime.v1.RuntimeService/Version
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.299747520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91329c72-7040-4b82-a55c-ca2d3fc91b6d name=/runtime.v1.RuntimeService/Version
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.300615387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d8bd7ad-e1f2-4d50-b797-3ff435ad1180 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.301038662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653236301017837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d8bd7ad-e1f2-4d50-b797-3ff435ad1180 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.301479606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ddcf820-2a77-44f8-8df4-b45b810094cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.301530810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ddcf820-2a77-44f8-8df4-b45b810094cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.301766015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723652958530026773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778223379570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778200551048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee,PodSandboxId:4c5c92213f0e6251be7e29adcda3cded019246457065d5c0b303c9d621a74ab5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723652778118596170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723652766365172600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172365276
2447290664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770,PodSandboxId:81fcaf0428bd7b15c5487925be0aaccb835f08d18cf3b4649f532fdc79b8e9e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172365275328
8661962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 498bfc5ba79cf3931c7cca41edd994ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723652750804081450,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723652750785186368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd,PodSandboxId:44348a00d6f65407f29b608c7166f2039a3b9bc56b2a09eb9ba311632aa6d825,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723652750790958720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea,PodSandboxId:004f1d9c571dd53906206c8edf18cc3624d52580711e76f40e3a2430cee0abf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723652750648705145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ddcf820-2a77-44f8-8df4-b45b810094cf name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.338729508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96f54ba1-2c61-4b75-b95b-4bf6bf3d82e9 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.338803999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96f54ba1-2c61-4b75-b95b-4bf6bf3d82e9 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.340116584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2941d194-926c-42c4-8b28-0838a5f31679 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.340605762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653236340582691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2941d194-926c-42c4-8b28-0838a5f31679 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.341204194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ecb9793-0eeb-4b97-af64-781cf09b4355 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.341305973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ecb9793-0eeb-4b97-af64-781cf09b4355 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:33:56 ha-597780 crio[678]: time="2024-08-14 16:33:56.341532034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723652958530026773,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778223379570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723652778200551048,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee,PodSandboxId:4c5c92213f0e6251be7e29adcda3cded019246457065d5c0b303c9d621a74ab5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723652778118596170,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723652766365172600,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172365276
2447290664,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770,PodSandboxId:81fcaf0428bd7b15c5487925be0aaccb835f08d18cf3b4649f532fdc79b8e9e5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172365275328
8661962,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 498bfc5ba79cf3931c7cca41edd994ee,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723652750804081450,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723652750785186368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd,PodSandboxId:44348a00d6f65407f29b608c7166f2039a3b9bc56b2a09eb9ba311632aa6d825,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723652750790958720,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubern
etes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea,PodSandboxId:004f1d9c571dd53906206c8edf18cc3624d52580711e76f40e3a2430cee0abf4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723652750648705145,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ecb9793-0eeb-4b97-af64-781cf09b4355 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e27a742b157d3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   24fc5367bc64f       busybox-7dff88458-rq7wd
	422bd8a4c6f73       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   103da86315438       coredns-6f6b679f8f-28k2m
	e6f5722727045       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   6b4d32c83825a       coredns-6f6b679f8f-kc84b
	fdde6ae1e8d74       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   4c5c92213f0e6       storage-provisioner
	9383508aacb47       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   7c496d8d976b0       kindnet-zm75h
	37ced76497679       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   403a7dadd2cf1       kube-proxy-79txl
	f67f9d9915d53       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   81fcaf0428bd7       kube-vip-ha-597780
	be37bacc58210       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago       Running             etcd                      0                   c3627f4eb5471       etcd-ha-597780
	72903e6054081       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago       Running             kube-controller-manager   0                   44348a00d6f65       kube-controller-manager-ha-597780
	9049789221ccd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago       Running             kube-scheduler            0                   dfba8d4d791ac       kube-scheduler-ha-597780
	4ad80a864cc60       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago       Running             kube-apiserver            0                   004f1d9c571dd       kube-apiserver-ha-597780
	
	
	==> coredns [422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c] <==
	[INFO] 10.244.2.2:35482 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159892s
	[INFO] 10.244.2.2:45275 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127461s
	[INFO] 10.244.0.4:43753 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123741s
	[INFO] 10.244.0.4:33481 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001866295s
	[INFO] 10.244.0.4:45903 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132096s
	[INFO] 10.244.0.4:38858 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001421125s
	[INFO] 10.244.1.2:43848 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094776s
	[INFO] 10.244.1.2:34489 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00124314s
	[INFO] 10.244.1.2:37019 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075532s
	[INFO] 10.244.1.2:33970 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072251s
	[INFO] 10.244.1.2:54832 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154144s
	[INFO] 10.244.2.2:44899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157073s
	[INFO] 10.244.2.2:57059 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000141689s
	[INFO] 10.244.2.2:36168 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009915s
	[INFO] 10.244.0.4:54131 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070841s
	[INFO] 10.244.0.4:55620 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091367s
	[INFO] 10.244.0.4:43235 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075669s
	[INFO] 10.244.1.2:41689 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119685s
	[INFO] 10.244.1.2:59902 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124326s
	[INFO] 10.244.2.2:40926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109376s
	[INFO] 10.244.2.2:51410 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177337s
	[INFO] 10.244.0.4:34296 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121681s
	[INFO] 10.244.1.2:46660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107008s
	[INFO] 10.244.1.2:58922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127256s
	[INFO] 10.244.1.2:50299 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110499s
	
	
	==> coredns [e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d] <==
	[INFO] 10.244.2.2:48502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000993326s
	[INFO] 10.244.2.2:58814 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.00702444s
	[INFO] 10.244.1.2:38201 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001498033s
	[INFO] 10.244.1.2:46765 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000402533s
	[INFO] 10.244.1.2:60614 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001481239s
	[INFO] 10.244.2.2:59844 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000200712s
	[INFO] 10.244.2.2:41213 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139289s
	[INFO] 10.244.2.2:59870 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000168386s
	[INFO] 10.244.0.4:37158 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073468s
	[INFO] 10.244.0.4:39161 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108787s
	[INFO] 10.244.0.4:39022 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165944s
	[INFO] 10.244.0.4:57473 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073383s
	[INFO] 10.244.1.2:44098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115699s
	[INFO] 10.244.1.2:33898 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001695302s
	[INFO] 10.244.1.2:48541 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080109s
	[INFO] 10.244.2.2:54351 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000123574s
	[INFO] 10.244.0.4:59667 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011063s
	[INFO] 10.244.1.2:44877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127432s
	[INFO] 10.244.1.2:57437 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119268s
	[INFO] 10.244.2.2:57502 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109191s
	[INFO] 10.244.2.2:34873 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084958s
	[INFO] 10.244.0.4:38163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100276s
	[INFO] 10.244.0.4:57638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133846s
	[INFO] 10.244.0.4:41879 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064694s
	[INFO] 10.244.1.2:53124 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175486s
	
	
	==> describe nodes <==
	Name:               ha-597780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_26_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:25:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:29:33 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:29:33 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:29:33 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:29:33 +0000   Wed, 14 Aug 2024 16:26:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-597780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 380f2e1fef9b4a7ba6d1d939cb1bae1a
	  System UUID:                380f2e1f-ef9b-4a7b-a6d1-d939cb1bae1a
	  Boot ID:                    aa55ed43-2220-4096-a571-51cd5b70ed86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rq7wd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 coredns-6f6b679f8f-28k2m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m55s
	  kube-system                 coredns-6f6b679f8f-kc84b             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m55s
	  kube-system                 etcd-ha-597780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m57s
	  kube-system                 kindnet-zm75h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m55s
	  kube-system                 kube-apiserver-ha-597780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-controller-manager-ha-597780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-proxy-79txl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-scheduler-ha-597780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-vip-ha-597780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m53s  kube-proxy       
	  Normal  Starting                 7m57s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m57s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m57s  kubelet          Node ha-597780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m57s  kubelet          Node ha-597780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m57s  kubelet          Node ha-597780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m56s  node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal  NodeReady                7m39s  kubelet          Node ha-597780 status is now: NodeReady
	  Normal  RegisteredNode           6m14s  node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal  RegisteredNode           5m2s   node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	
	
	Name:               ha-597780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_27_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:27:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:30:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 14 Aug 2024 16:29:36 +0000   Wed, 14 Aug 2024 16:31:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 14 Aug 2024 16:29:36 +0000   Wed, 14 Aug 2024 16:31:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 14 Aug 2024 16:29:36 +0000   Wed, 14 Aug 2024 16:31:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 14 Aug 2024 16:29:36 +0000   Wed, 14 Aug 2024 16:31:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-597780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a36bc81f5b549f48c64d8093b0c45f0
	  System UUID:                2a36bc81-f5b5-49f4-8c64-d8093b0c45f0
	  Boot ID:                    cbc02bb3-0be5-453b-8e50-9b929e5b8c87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9lh2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-ha-597780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-c8f8r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-597780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-597780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-4q2dq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-597780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-597780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m22s (x8 over 6m23s)  kubelet          Node ha-597780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s (x8 over 6m23s)  kubelet          Node ha-597780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s (x7 over 6m23s)  kubelet          Node ha-597780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  NodeNotReady             2m47s                  node-controller  Node ha-597780-m02 status is now: NodeNotReady
	
	
	Name:               ha-597780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_28_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:28:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:33:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:29:47 +0000   Wed, 14 Aug 2024 16:28:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:29:47 +0000   Wed, 14 Aug 2024 16:28:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:29:47 +0000   Wed, 14 Aug 2024 16:28:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:29:47 +0000   Wed, 14 Aug 2024 16:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-597780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ad778cd276b4853bc1e6d49295cbd2e
	  System UUID:                6ad778cd-276b-4853-bc1e-6d49295cbd2e
	  Boot ID:                    bd84ee8a-9079-478b-80c5-90f2f9e71408
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-27k42                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-ha-597780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-2p7zj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m10s
	  kube-system                 kube-apiserver-ha-597780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-ha-597780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-proxy-97tjj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-ha-597780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-vip-ha-597780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node ha-597780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node ha-597780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node ha-597780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	
	
	Name:               ha-597780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_29_55_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:29:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:33:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:30:25 +0000   Wed, 14 Aug 2024 16:29:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:30:25 +0000   Wed, 14 Aug 2024 16:29:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:30:25 +0000   Wed, 14 Aug 2024 16:29:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:30:25 +0000   Wed, 14 Aug 2024 16:30:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    ha-597780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fa932f445844ff7a66a64ac6cdf169b
	  System UUID:                0fa932f4-4584-4ff7-a66a-64ac6cdf169b
	  Boot ID:                    305597ed-d6ab-49f8-ae00-26804526aa5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5x5s7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-proxy-bmf62    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m2s (x8 over 4m3s)  kubelet          Node ha-597780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x8 over 4m3s)  kubelet          Node ha-597780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x7 over 4m3s)  kubelet          Node ha-597780-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	
	
	==> dmesg <==
	[Aug14 16:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050534] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036884] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.713616] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.759222] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.575706] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.613825] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.065926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069239] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.173403] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.130531] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.250569] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +3.824868] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +3.756438] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.057963] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.054111] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.086455] kauditd_printk_skb: 79 callbacks suppressed
	[Aug14 16:26] kauditd_printk_skb: 62 callbacks suppressed
	[Aug14 16:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905] <==
	{"level":"warn","ts":"2024-08-14T16:33:56.353747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.451383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.538712Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.540604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.551478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.585458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.592197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.595436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.606652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.613034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.640328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.646419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.650685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.651459Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.654495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.661348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.667807Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.700518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.714549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.727471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.737613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.750500Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.752402Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.757814Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:33:56.759936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 16:33:56 up 8 min,  0 users,  load average: 0.20, 0.26, 0.18
	Linux ha-597780 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1] <==
	I0814 16:33:17.363493       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:33:27.366807       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:33:27.366840       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:33:27.367038       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:33:27.367068       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:33:27.367275       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:33:27.367303       1 main.go:299] handling current node
	I0814 16:33:27.367321       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:33:27.367327       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:33:37.366834       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:33:37.367009       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:33:37.367446       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:33:37.367499       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:33:37.367611       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:33:37.367631       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:33:37.367705       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:33:37.367724       1 main.go:299] handling current node
	I0814 16:33:47.365544       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:33:47.365627       1 main.go:299] handling current node
	I0814 16:33:47.365655       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:33:47.365663       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:33:47.365841       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:33:47.365861       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:33:47.365914       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:33:47.365919       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea] <==
	W0814 16:25:55.762483       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.4]
	I0814 16:25:55.763280       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 16:25:55.769816       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 16:25:56.090514       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 16:25:59.885282       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 16:25:59.904974       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0814 16:25:59.913868       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 16:26:01.337367       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0814 16:26:01.746147       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0814 16:29:19.129394       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41740: use of closed network connection
	E0814 16:29:19.372915       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41758: use of closed network connection
	E0814 16:29:19.550858       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41774: use of closed network connection
	E0814 16:29:19.734746       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41792: use of closed network connection
	E0814 16:29:19.909648       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41808: use of closed network connection
	E0814 16:29:20.076996       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41828: use of closed network connection
	E0814 16:29:20.246071       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41844: use of closed network connection
	E0814 16:29:20.411630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41862: use of closed network connection
	E0814 16:29:20.589195       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41888: use of closed network connection
	E0814 16:29:20.865814       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41914: use of closed network connection
	E0814 16:29:21.043561       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41936: use of closed network connection
	E0814 16:29:21.218997       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41966: use of closed network connection
	E0814 16:29:21.388922       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41992: use of closed network connection
	E0814 16:29:21.560524       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42000: use of closed network connection
	E0814 16:29:21.735637       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42006: use of closed network connection
	W0814 16:30:45.784712       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.167 192.168.39.4]
	
	
	==> kube-controller-manager [72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd] <==
	I0814 16:29:54.600855       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-597780-m04" podCIDRs=["10.244.3.0/24"]
	I0814 16:29:54.600902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:54.600975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:54.620307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:54.838715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:55.055737       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:55.435910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:55.972247       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:55.973019       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-597780-m04"
	I0814 16:29:56.014610       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:57.234815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:29:57.322839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:04.884780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:13.554499       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:13.555272       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-597780-m04"
	I0814 16:30:13.570596       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:14.778502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:30:25.273275       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:31:09.806307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	I0814 16:31:09.806702       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-597780-m04"
	I0814 16:31:09.827514       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	I0814 16:31:09.884153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.333879ms"
	I0814 16:31:09.885393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.009µs"
	I0814 16:31:11.084544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	I0814 16:31:15.055155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	
	
	==> kube-proxy [37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:26:02.673675       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 16:26:02.694314       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.4"]
	E0814 16:26:02.694393       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:26:02.727764       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:26:02.727815       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:26:02.727845       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:26:02.729922       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:26:02.730197       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:26:02.730270       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:26:02.732001       1 config.go:197] "Starting service config controller"
	I0814 16:26:02.732031       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:26:02.732048       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:26:02.732051       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:26:02.734298       1 config.go:326] "Starting node config controller"
	I0814 16:26:02.734385       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:26:02.832657       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:26:02.832736       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:26:02.834437       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e] <==
	W0814 16:25:55.124761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:25:55.124856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:25:55.134951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:25:55.135030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:25:55.234922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 16:25:55.235107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:25:55.275533       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 16:25:55.275674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:25:55.384531       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 16:25:55.384674       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 16:25:55.440408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 16:25:55.440501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0814 16:25:57.150779       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 16:29:14.511741       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9lh2\": pod busybox-7dff88458-w9lh2 is already assigned to node \"ha-597780-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-w9lh2" node="ha-597780-m02"
	E0814 16:29:14.513586       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d61c6e28-3a9c-47b5-ad97-6d1c77c30857(default/busybox-7dff88458-w9lh2) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w9lh2"
	E0814 16:29:14.513669       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9lh2\": pod busybox-7dff88458-w9lh2 is already assigned to node \"ha-597780-m02\"" pod="default/busybox-7dff88458-w9lh2"
	I0814 16:29:14.513886       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w9lh2" node="ha-597780-m02"
	E0814 16:29:14.544849       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-27k42\": pod busybox-7dff88458-27k42 is already assigned to node \"ha-597780-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-27k42" node="ha-597780-m03"
	E0814 16:29:14.544959       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-27k42\": pod busybox-7dff88458-27k42 is already assigned to node \"ha-597780-m03\"" pod="default/busybox-7dff88458-27k42"
	E0814 16:29:14.545719       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rq7wd\": pod busybox-7dff88458-rq7wd is already assigned to node \"ha-597780\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rq7wd" node="ha-597780"
	E0814 16:29:14.557325       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rq7wd\": pod busybox-7dff88458-rq7wd is already assigned to node \"ha-597780\"" pod="default/busybox-7dff88458-rq7wd"
	E0814 16:29:54.657005       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5x5s7\": pod kindnet-5x5s7 is already assigned to node \"ha-597780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5x5s7" node="ha-597780-m04"
	E0814 16:29:54.657112       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 45af1890-2443-48af-a4f1-38ce0ab0f558(kube-system/kindnet-5x5s7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5x5s7"
	E0814 16:29:54.657139       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5x5s7\": pod kindnet-5x5s7 is already assigned to node \"ha-597780-m04\"" pod="kube-system/kindnet-5x5s7"
	I0814 16:29:54.657164       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5x5s7" node="ha-597780-m04"
	
	
	==> kubelet <==
	Aug 14 16:32:19 ha-597780 kubelet[1315]: E0814 16:32:19.990013    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653139989535570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:19 ha-597780 kubelet[1315]: E0814 16:32:19.990065    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653139989535570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:29 ha-597780 kubelet[1315]: E0814 16:32:29.991918    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653149991668287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:29 ha-597780 kubelet[1315]: E0814 16:32:29.991990    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653149991668287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:39 ha-597780 kubelet[1315]: E0814 16:32:39.994199    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653159993729013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:39 ha-597780 kubelet[1315]: E0814 16:32:39.994853    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653159993729013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:49 ha-597780 kubelet[1315]: E0814 16:32:49.996731    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653169996427038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:49 ha-597780 kubelet[1315]: E0814 16:32:49.997002    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653169996427038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:59 ha-597780 kubelet[1315]: E0814 16:32:59.864985    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 16:32:59 ha-597780 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 16:32:59 ha-597780 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 16:32:59 ha-597780 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 16:32:59 ha-597780 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 16:32:59 ha-597780 kubelet[1315]: E0814 16:32:59.998840    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653179998609571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:32:59 ha-597780 kubelet[1315]: E0814 16:32:59.998878    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653179998609571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:10 ha-597780 kubelet[1315]: E0814 16:33:10.001604    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653190000973518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:10 ha-597780 kubelet[1315]: E0814 16:33:10.001668    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653190000973518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:20 ha-597780 kubelet[1315]: E0814 16:33:20.003845    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653200003524147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:20 ha-597780 kubelet[1315]: E0814 16:33:20.004193    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653200003524147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:30 ha-597780 kubelet[1315]: E0814 16:33:30.006652    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653210006316846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:30 ha-597780 kubelet[1315]: E0814 16:33:30.006970    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653210006316846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:40 ha-597780 kubelet[1315]: E0814 16:33:40.009395    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653220009038712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:40 ha-597780 kubelet[1315]: E0814 16:33:40.009431    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653220009038712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:50 ha-597780 kubelet[1315]: E0814 16:33:50.011509    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653230011046221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:33:50 ha-597780 kubelet[1315]: E0814 16:33:50.011804    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653230011046221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-597780 -n ha-597780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-597780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (59.12s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (417.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-597780 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-597780 -v=7 --alsologtostderr
E0814 16:34:29.459582   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:34:57.162757   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-597780 -v=7 --alsologtostderr: exit status 82 (2m1.783445014s)

                                                
                                                
-- stdout --
	* Stopping node "ha-597780-m04"  ...
	* Stopping node "ha-597780-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:33:58.192463   37861 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:33:58.192569   37861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:58.192578   37861 out.go:304] Setting ErrFile to fd 2...
	I0814 16:33:58.192583   37861 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:33:58.192759   37861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:33:58.192987   37861 out.go:298] Setting JSON to false
	I0814 16:33:58.193075   37861 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:58.193415   37861 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:58.193497   37861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:33:58.193685   37861 mustload.go:65] Loading cluster: ha-597780
	I0814 16:33:58.193851   37861 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:33:58.193896   37861 stop.go:39] StopHost: ha-597780-m04
	I0814 16:33:58.194258   37861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:58.194302   37861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:58.208798   37861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34191
	I0814 16:33:58.209293   37861 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:58.209869   37861 main.go:141] libmachine: Using API Version  1
	I0814 16:33:58.209890   37861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:58.210214   37861 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:58.212986   37861 out.go:177] * Stopping node "ha-597780-m04"  ...
	I0814 16:33:58.214140   37861 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 16:33:58.214170   37861 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:33:58.214396   37861 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 16:33:58.214424   37861 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:33:58.217616   37861 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:58.218039   37861 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:29:36 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:33:58.218070   37861 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:33:58.218213   37861 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:33:58.218386   37861 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:33:58.218536   37861 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:33:58.218700   37861 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:33:58.305855   37861 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 16:33:58.358284   37861 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 16:33:58.411577   37861 main.go:141] libmachine: Stopping "ha-597780-m04"...
	I0814 16:33:58.411605   37861 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:58.413065   37861 main.go:141] libmachine: (ha-597780-m04) Calling .Stop
	I0814 16:33:58.417033   37861 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 0/120
	I0814 16:33:59.519906   37861 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:33:59.521275   37861 main.go:141] libmachine: Machine "ha-597780-m04" was stopped.
	I0814 16:33:59.521292   37861 stop.go:75] duration metric: took 1.307153521s to stop
	I0814 16:33:59.521309   37861 stop.go:39] StopHost: ha-597780-m03
	I0814 16:33:59.521602   37861 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:33:59.521645   37861 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:33:59.536348   37861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39429
	I0814 16:33:59.536735   37861 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:33:59.537193   37861 main.go:141] libmachine: Using API Version  1
	I0814 16:33:59.537214   37861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:33:59.537494   37861 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:33:59.539408   37861 out.go:177] * Stopping node "ha-597780-m03"  ...
	I0814 16:33:59.540696   37861 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 16:33:59.540716   37861 main.go:141] libmachine: (ha-597780-m03) Calling .DriverName
	I0814 16:33:59.540925   37861 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 16:33:59.540944   37861 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHHostname
	I0814 16:33:59.543747   37861 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:59.544170   37861 main.go:141] libmachine: (ha-597780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:61:b4", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:28:12 +0000 UTC Type:0 Mac:52:54:00:e0:61:b4 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:ha-597780-m03 Clientid:01:52:54:00:e0:61:b4}
	I0814 16:33:59.544200   37861 main.go:141] libmachine: (ha-597780-m03) DBG | domain ha-597780-m03 has defined IP address 192.168.39.167 and MAC address 52:54:00:e0:61:b4 in network mk-ha-597780
	I0814 16:33:59.544338   37861 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHPort
	I0814 16:33:59.544503   37861 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHKeyPath
	I0814 16:33:59.544633   37861 main.go:141] libmachine: (ha-597780-m03) Calling .GetSSHUsername
	I0814 16:33:59.544741   37861 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m03/id_rsa Username:docker}
	I0814 16:33:59.630024   37861 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 16:33:59.682324   37861 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 16:33:59.736783   37861 main.go:141] libmachine: Stopping "ha-597780-m03"...
	I0814 16:33:59.736807   37861 main.go:141] libmachine: (ha-597780-m03) Calling .GetState
	I0814 16:33:59.738477   37861 main.go:141] libmachine: (ha-597780-m03) Calling .Stop
	I0814 16:33:59.742642   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 0/120
	I0814 16:34:00.743996   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 1/120
	I0814 16:34:01.745419   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 2/120
	I0814 16:34:02.746762   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 3/120
	I0814 16:34:03.748181   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 4/120
	I0814 16:34:04.750014   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 5/120
	I0814 16:34:05.751161   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 6/120
	I0814 16:34:06.753006   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 7/120
	I0814 16:34:07.754441   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 8/120
	I0814 16:34:08.756006   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 9/120
	I0814 16:34:09.758131   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 10/120
	I0814 16:34:10.759368   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 11/120
	I0814 16:34:11.760735   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 12/120
	I0814 16:34:12.762065   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 13/120
	I0814 16:34:13.763415   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 14/120
	I0814 16:34:14.765893   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 15/120
	I0814 16:34:15.767262   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 16/120
	I0814 16:34:16.768811   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 17/120
	I0814 16:34:17.770428   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 18/120
	I0814 16:34:18.772122   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 19/120
	I0814 16:34:19.774550   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 20/120
	I0814 16:34:20.776572   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 21/120
	I0814 16:34:21.778027   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 22/120
	I0814 16:34:22.779358   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 23/120
	I0814 16:34:23.780702   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 24/120
	I0814 16:34:24.782368   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 25/120
	I0814 16:34:25.783898   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 26/120
	I0814 16:34:26.785479   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 27/120
	I0814 16:34:27.787118   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 28/120
	I0814 16:34:28.788567   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 29/120
	I0814 16:34:29.790257   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 30/120
	I0814 16:34:30.791878   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 31/120
	I0814 16:34:31.793882   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 32/120
	I0814 16:34:32.795371   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 33/120
	I0814 16:34:33.796798   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 34/120
	I0814 16:34:34.798177   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 35/120
	I0814 16:34:35.799352   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 36/120
	I0814 16:34:36.800665   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 37/120
	I0814 16:34:37.802246   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 38/120
	I0814 16:34:38.803543   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 39/120
	I0814 16:34:39.805220   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 40/120
	I0814 16:34:40.806441   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 41/120
	I0814 16:34:41.807605   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 42/120
	I0814 16:34:42.809592   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 43/120
	I0814 16:34:43.811310   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 44/120
	I0814 16:34:44.812987   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 45/120
	I0814 16:34:45.814394   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 46/120
	I0814 16:34:46.815636   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 47/120
	I0814 16:34:47.817659   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 48/120
	I0814 16:34:48.818734   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 49/120
	I0814 16:34:49.820252   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 50/120
	I0814 16:34:50.821672   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 51/120
	I0814 16:34:51.822974   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 52/120
	I0814 16:34:52.824311   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 53/120
	I0814 16:34:53.825438   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 54/120
	I0814 16:34:54.827042   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 55/120
	I0814 16:34:55.828407   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 56/120
	I0814 16:34:56.830126   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 57/120
	I0814 16:34:57.831396   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 58/120
	I0814 16:34:58.832534   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 59/120
	I0814 16:34:59.834299   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 60/120
	I0814 16:35:00.835642   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 61/120
	I0814 16:35:01.837643   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 62/120
	I0814 16:35:02.838899   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 63/120
	I0814 16:35:03.840183   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 64/120
	I0814 16:35:04.841990   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 65/120
	I0814 16:35:05.843279   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 66/120
	I0814 16:35:06.844556   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 67/120
	I0814 16:35:07.846117   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 68/120
	I0814 16:35:08.847696   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 69/120
	I0814 16:35:09.849572   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 70/120
	I0814 16:35:10.850841   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 71/120
	I0814 16:35:11.852566   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 72/120
	I0814 16:35:12.853765   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 73/120
	I0814 16:35:13.854896   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 74/120
	I0814 16:35:14.856672   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 75/120
	I0814 16:35:15.857870   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 76/120
	I0814 16:35:16.859188   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 77/120
	I0814 16:35:17.860601   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 78/120
	I0814 16:35:18.861894   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 79/120
	I0814 16:35:19.863600   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 80/120
	I0814 16:35:20.864939   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 81/120
	I0814 16:35:21.866296   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 82/120
	I0814 16:35:22.867679   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 83/120
	I0814 16:35:23.869047   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 84/120
	I0814 16:35:24.871022   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 85/120
	I0814 16:35:25.872287   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 86/120
	I0814 16:35:26.873762   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 87/120
	I0814 16:35:27.875153   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 88/120
	I0814 16:35:28.876595   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 89/120
	I0814 16:35:29.878327   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 90/120
	I0814 16:35:30.879929   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 91/120
	I0814 16:35:31.881654   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 92/120
	I0814 16:35:32.883263   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 93/120
	I0814 16:35:33.884849   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 94/120
	I0814 16:35:34.886301   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 95/120
	I0814 16:35:35.887930   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 96/120
	I0814 16:35:36.889407   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 97/120
	I0814 16:35:37.891014   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 98/120
	I0814 16:35:38.892247   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 99/120
	I0814 16:35:39.894048   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 100/120
	I0814 16:35:40.895543   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 101/120
	I0814 16:35:41.896845   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 102/120
	I0814 16:35:42.898673   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 103/120
	I0814 16:35:43.900049   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 104/120
	I0814 16:35:44.901476   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 105/120
	I0814 16:35:45.902895   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 106/120
	I0814 16:35:46.904416   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 107/120
	I0814 16:35:47.906116   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 108/120
	I0814 16:35:48.907646   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 109/120
	I0814 16:35:49.909096   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 110/120
	I0814 16:35:50.910289   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 111/120
	I0814 16:35:51.911721   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 112/120
	I0814 16:35:52.913207   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 113/120
	I0814 16:35:53.914469   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 114/120
	I0814 16:35:54.916228   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 115/120
	I0814 16:35:55.918009   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 116/120
	I0814 16:35:56.919729   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 117/120
	I0814 16:35:57.921272   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 118/120
	I0814 16:35:58.922775   37861 main.go:141] libmachine: (ha-597780-m03) Waiting for machine to stop 119/120
	I0814 16:35:59.923874   37861 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 16:35:59.923942   37861 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 16:35:59.925739   37861 out.go:177] 
	W0814 16:35:59.927171   37861 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 16:35:59.927193   37861 out.go:239] * 
	* 
	W0814 16:35:59.929578   37861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 16:35:59.931137   37861 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-597780 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-597780 --wait=true -v=7 --alsologtostderr
E0814 16:38:02.589052   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:39:25.654467   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:39:29.459952   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-597780 --wait=true -v=7 --alsologtostderr: (4m52.908262613s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-597780
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-597780 -n ha-597780
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-597780 logs -n 25: (1.93868544s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m02:/home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m02 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04:/home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m04 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp testdata/cp-test.txt                                                | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780:/home/docker/cp-test_ha-597780-m04_ha-597780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780 sudo cat                                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m02:/home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m02 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03:/home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m03 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-597780 node stop m02 -v=7                                                     | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-597780 node start m02 -v=7                                                    | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-597780 -v=7                                                           | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-597780 -v=7                                                                | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-597780 --wait=true -v=7                                                    | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:35 UTC | 14 Aug 24 16:40 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-597780                                                                | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:40 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:35:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
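The header above describes the klog format the entries below use: a severity letter (I/W/E/F), the date as mmdd, a timestamp, the thread id, the emitting source file and line, then the message. A minimal sketch for pulling those fields apart when triaging a long start log, assuming the block has been saved to a plain file (start.log is an illustrative name, not something the test produces):

    # Extract severity, timestamp and source location from klog-style lines
    # such as "I0814 16:36:00.045543   38304 start.go:297] selected driver: kvm2"
    grep -E '^[[:space:]]*[IWEF][0-9]{4} ' start.log |
      awk '{ sev=substr($1,1,1); ts=$2; src=$4; $1=$2=$3=$4=""; print sev, ts, src, $0 }'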
	I0814 16:35:59.976231   38304 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:35:59.976478   38304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:35:59.976486   38304 out.go:304] Setting ErrFile to fd 2...
	I0814 16:35:59.976491   38304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:35:59.976653   38304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:35:59.977237   38304 out.go:298] Setting JSON to false
	I0814 16:35:59.978180   38304 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4704,"bootTime":1723648656,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:35:59.978233   38304 start.go:139] virtualization: kvm guest
	I0814 16:35:59.980770   38304 out.go:177] * [ha-597780] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:35:59.982118   38304 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:35:59.982133   38304 notify.go:220] Checking for updates...
	I0814 16:35:59.984435   38304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:35:59.985844   38304 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:35:59.987052   38304 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:35:59.988281   38304 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:35:59.989533   38304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:35:59.991381   38304 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:35:59.991491   38304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:35:59.991932   38304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:35:59.992011   38304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:36:00.006624   38304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0814 16:36:00.007076   38304 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:36:00.007589   38304 main.go:141] libmachine: Using API Version  1
	I0814 16:36:00.007609   38304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:36:00.008014   38304 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:36:00.008196   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:36:00.044240   38304 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 16:36:00.045543   38304 start.go:297] selected driver: kvm2
	I0814 16:36:00.045557   38304 start.go:901] validating driver "kvm2" against &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.209 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:36:00.045733   38304 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:36:00.046169   38304 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:36:00.046256   38304 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 16:36:00.061008   38304 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 16:36:00.061723   38304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:36:00.061807   38304 cni.go:84] Creating CNI manager for ""
	I0814 16:36:00.061823   38304 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 16:36:00.061884   38304 start.go:340] cluster config:
	{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.209 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:36:00.062029   38304 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:36:00.063777   38304 out.go:177] * Starting "ha-597780" primary control-plane node in "ha-597780" cluster
	I0814 16:36:00.065215   38304 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:36:00.065260   38304 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:36:00.065271   38304 cache.go:56] Caching tarball of preloaded images
	I0814 16:36:00.065368   38304 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:36:00.065394   38304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:36:00.065506   38304 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:36:00.065787   38304 start.go:360] acquireMachinesLock for ha-597780: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:36:00.065855   38304 start.go:364] duration metric: took 41.326µs to acquireMachinesLock for "ha-597780"
	I0814 16:36:00.065878   38304 start.go:96] Skipping create...Using existing machine configuration
	I0814 16:36:00.065902   38304 fix.go:54] fixHost starting: 
	I0814 16:36:00.066346   38304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:36:00.066395   38304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:36:00.080986   38304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37049
	I0814 16:36:00.081450   38304 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:36:00.082058   38304 main.go:141] libmachine: Using API Version  1
	I0814 16:36:00.082080   38304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:36:00.082479   38304 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:36:00.082723   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:36:00.082905   38304 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:36:00.084804   38304 fix.go:112] recreateIfNeeded on ha-597780: state=Running err=<nil>
	W0814 16:36:00.084825   38304 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 16:36:00.087430   38304 out.go:177] * Updating the running kvm2 "ha-597780" VM ...
	I0814 16:36:00.088754   38304 machine.go:94] provisionDockerMachine start ...
	I0814 16:36:00.088769   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:36:00.088949   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.091354   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.091786   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.091815   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.091949   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.092133   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.092301   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.092436   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.092601   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:36:00.092774   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:36:00.092785   38304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 16:36:00.196491   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780
	
	I0814 16:36:00.196523   38304 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:36:00.196796   38304 buildroot.go:166] provisioning hostname "ha-597780"
	I0814 16:36:00.196837   38304 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:36:00.197039   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.199656   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.199982   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.200009   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.200167   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.200352   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.200500   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.200616   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.200755   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:36:00.200920   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:36:00.200932   38304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-597780 && echo "ha-597780" | sudo tee /etc/hostname
	I0814 16:36:00.314158   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780
	
	I0814 16:36:00.314187   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.317090   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.317426   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.317452   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.317703   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.317904   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.318059   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.318232   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.318415   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:36:00.318635   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:36:00.318656   38304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-597780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-597780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-597780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:36:00.415943   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:36:00.415972   38304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:36:00.416003   38304 buildroot.go:174] setting up certificates
	I0814 16:36:00.416018   38304 provision.go:84] configureAuth start
	I0814 16:36:00.416027   38304 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:36:00.416307   38304 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:36:00.418868   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.419237   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.419274   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.419447   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.421573   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.422025   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.422051   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.422188   38304 provision.go:143] copyHostCerts
	I0814 16:36:00.422220   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:36:00.422251   38304 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 16:36:00.422259   38304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:36:00.422322   38304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:36:00.422426   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:36:00.422453   38304 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 16:36:00.422459   38304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:36:00.422499   38304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:36:00.422586   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:36:00.422609   38304 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 16:36:00.422617   38304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:36:00.422654   38304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:36:00.422747   38304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.ha-597780 san=[127.0.0.1 192.168.39.4 ha-597780 localhost minikube]
	I0814 16:36:00.512554   38304 provision.go:177] copyRemoteCerts
	I0814 16:36:00.512615   38304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:36:00.512638   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.515444   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.515823   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.515851   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.516076   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.516265   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.516439   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.516582   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:36:00.597306   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 16:36:00.597379   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:36:00.620191   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 16:36:00.620250   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:36:00.645421   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 16:36:00.645482   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0814 16:36:00.668197   38304 provision.go:87] duration metric: took 252.165479ms to configureAuth
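configureAuth above regenerates the machine certs and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest (the three scp lines). If a later step fails with TLS errors, a quick generic check from inside the VM (for example via out/minikube-linux-amd64 -p ha-597780 ssh) is to confirm the files landed and that the server cert verifies against the copied CA; the openssl call below is a standard check, not something this test runs:

    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem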
	I0814 16:36:00.668230   38304 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:36:00.668516   38304 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:36:00.668608   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.671433   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.671869   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.671900   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.672111   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.672275   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.672408   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.672541   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.672742   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:36:00.672942   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:36:00.672968   38304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:37:31.423958   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:37:31.423988   38304 machine.go:97] duration metric: took 1m31.335222511s to provisionDockerMachine
	I0814 16:37:31.424000   38304 start.go:293] postStartSetup for "ha-597780" (driver="kvm2")
	I0814 16:37:31.424011   38304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:37:31.424028   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.424392   38304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:37:31.424416   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.427833   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.428310   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.428336   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.428500   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.428673   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.428812   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.428962   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:37:31.510576   38304 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:37:31.514529   38304 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:37:31.514557   38304 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:37:31.514619   38304 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:37:31.514719   38304 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 16:37:31.514732   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 16:37:31.514858   38304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 16:37:31.524175   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:37:31.547624   38304 start.go:296] duration metric: took 123.609641ms for postStartSetup
	I0814 16:37:31.547670   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.547948   38304 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0814 16:37:31.547972   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.550732   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.551052   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.551074   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.551273   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.551477   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.551650   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.551795   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	W0814 16:37:31.629152   38304 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0814 16:37:31.629175   38304 fix.go:56] duration metric: took 1m31.563287641s for fixHost
	I0814 16:37:31.629195   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.632193   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.632539   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.632577   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.632732   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.632919   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.633105   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.633248   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.633418   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:37:31.633629   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:37:31.633645   38304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:37:31.731807   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723653451.684969952
	
	I0814 16:37:31.731830   38304 fix.go:216] guest clock: 1723653451.684969952
	I0814 16:37:31.731837   38304 fix.go:229] Guest: 2024-08-14 16:37:31.684969952 +0000 UTC Remote: 2024-08-14 16:37:31.629181773 +0000 UTC m=+91.687471026 (delta=55.788179ms)
	I0814 16:37:31.731855   38304 fix.go:200] guest clock delta is within tolerance: 55.788179ms
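The mangled `date +%!s(MISSING).%!N(MISSING)` a few lines up is minikube's guest-clock probe (the intended format string is `%s.%N`); fix.go compares the value returned over SSH with the host time recorded when the command finished, and here the roughly 55.8 ms delta is within tolerance. The same comparison can be reproduced by hand; the commands below are an illustrative sketch only (the threshold minikube actually applies is not quoted in this log):

    # Rough host-vs-guest clock delta, assuming bc is installed on the host
    host_now=$(date +%s.%N)
    guest_now=$(out/minikube-linux-amd64 -p ha-597780 ssh "date +%s.%N")
    echo "guest/host clock delta: $(echo "$host_now - $guest_now" | bc -l)s"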
	I0814 16:37:31.731861   38304 start.go:83] releasing machines lock for "ha-597780", held for 1m31.665992819s
	I0814 16:37:31.731884   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.732143   38304 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:37:31.735105   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.735542   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.735577   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.735757   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.736254   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.736461   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.736577   38304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:37:31.736621   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.736674   38304 ssh_runner.go:195] Run: cat /version.json
	I0814 16:37:31.736697   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.739283   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.739410   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.739779   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.739842   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.739865   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.739881   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.739944   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.740074   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.740142   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.740226   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.740315   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.740373   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.740496   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:37:31.740554   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:37:31.852886   38304 ssh_runner.go:195] Run: systemctl --version
	I0814 16:37:31.858678   38304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:37:32.016098   38304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:37:32.023291   38304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:37:32.023371   38304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:37:32.031883   38304 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 16:37:32.031901   38304 start.go:495] detecting cgroup driver to use...
	I0814 16:37:32.031958   38304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:37:32.046647   38304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:37:32.059641   38304 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:37:32.059699   38304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:37:32.072485   38304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:37:32.085345   38304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:37:32.234125   38304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:37:32.370411   38304 docker.go:233] disabling docker service ...
	I0814 16:37:32.370495   38304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:37:32.386049   38304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:37:32.399257   38304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:37:32.537900   38304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:37:32.677132   38304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:37:32.690524   38304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:37:32.708081   38304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:37:32.708142   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.718154   38304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:37:32.718222   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.728032   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.737888   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.747340   38304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:37:32.757112   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.767552   38304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.777965   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.787811   38304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:37:32.797641   38304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:37:32.806785   38304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:37:32.951248   38304 ssh_runner.go:195] Run: sudo systemctl restart crio
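The sed runs above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and `systemctl restart crio`. Reassembling those expressions, the drop-in should end up containing roughly the settings below (a reconstruction from the commands in this log, not a capture from the VM; any other keys already present in the file are left untouched):

    # sudo cat /etc/crio/crio.conf.d/02-crio.conf   (expected fragments)
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]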
	I0814 16:37:33.225205   38304 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:37:33.225268   38304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:37:33.231641   38304 start.go:563] Will wait 60s for crictl version
	I0814 16:37:33.231685   38304 ssh_runner.go:195] Run: which crictl
	I0814 16:37:33.235367   38304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:37:33.271002   38304 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:37:33.271090   38304 ssh_runner.go:195] Run: crio --version
	I0814 16:37:33.299017   38304 ssh_runner.go:195] Run: crio --version
	I0814 16:37:33.330758   38304 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:37:33.332218   38304 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:37:33.335407   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:33.335852   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:33.335879   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:33.336090   38304 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:37:33.340785   38304 kubeadm.go:883] updating cluster {Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.209 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 16:37:33.340924   38304 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:37:33.340965   38304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:37:33.385162   38304 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:37:33.385187   38304 crio.go:433] Images already preloaded, skipping extraction
	I0814 16:37:33.385244   38304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:37:33.421801   38304 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:37:33.421825   38304 cache_images.go:84] Images are preloaded, skipping loading
	I0814 16:37:33.421833   38304 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.0 crio true true} ...
	I0814 16:37:33.421955   38304 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-597780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
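The [Unit]/[Service] fragment above is the kubelet drop-in minikube generates for this node; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 307-byte scp further down. Two standard systemd commands (run inside the guest) show whether kubelet actually picked up the --hostname-override and --node-ip flags; these are generic checks, not steps the test performs:

    systemctl cat kubelet                             # merged unit, including the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager    # effective kubelet command line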
	I0814 16:37:33.422038   38304 ssh_runner.go:195] Run: crio config
	I0814 16:37:33.473783   38304 cni.go:84] Creating CNI manager for ""
	I0814 16:37:33.473807   38304 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 16:37:33.473819   38304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 16:37:33.473849   38304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-597780 NodeName:ha-597780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 16:37:33.473985   38304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-597780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
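The multi-document kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new a little further down. If a control-plane step later fails on it, the rendered file can be sanity-checked against the pinned binaries already on the node; `kubeadm config validate` is a stock kubeadm subcommand and, assuming this v1.31.0 build ships it, is not something the log shows minikube invoking:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new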
	
	I0814 16:37:33.474002   38304 kube-vip.go:115] generating kube-vip config ...
	I0814 16:37:33.474047   38304 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 16:37:33.485053   38304 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 16:37:33.485190   38304 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
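	The kube-vip manifest above binds the control-plane VIP 192.168.39.254 to eth0 and runs leader election through the plndr-cp-lock lease in kube-system. A rough way to verify both once the cluster is up, assuming kubectl has a ha-597780 context:

	    minikube -p ha-597780 ssh -- ip addr show eth0 | grep 192.168.39.254
	    kubectl --context ha-597780 -n kube-system get lease plndr-cp-lock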
	I0814 16:37:33.485245   38304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:37:33.494772   38304 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 16:37:33.494837   38304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0814 16:37:33.503758   38304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0814 16:37:33.520141   38304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:37:33.536125   38304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0814 16:37:33.553365   38304 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0814 16:37:33.569739   38304 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 16:37:33.574714   38304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:37:33.722569   38304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:37:33.737273   38304 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780 for IP: 192.168.39.4
	I0814 16:37:33.737305   38304 certs.go:194] generating shared ca certs ...
	I0814 16:37:33.737328   38304 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:37:33.737516   38304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:37:33.737595   38304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:37:33.737613   38304 certs.go:256] generating profile certs ...
	I0814 16:37:33.737743   38304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key
	I0814 16:37:33.737783   38304 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.3fce3d93
	I0814 16:37:33.737815   38304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.3fce3d93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.225 192.168.39.167 192.168.39.254]
	I0814 16:37:33.979222   38304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.3fce3d93 ...
	I0814 16:37:33.979256   38304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.3fce3d93: {Name:mkb87fe715cb554aa1237444086f355a72cf705b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:37:33.979464   38304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.3fce3d93 ...
	I0814 16:37:33.979481   38304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.3fce3d93: {Name:mk777447d0b1ce75f45ec8e2dd80f852f96d3182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:37:33.979573   38304 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.3fce3d93 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt
	I0814 16:37:33.979742   38304 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.3fce3d93 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key
	I0814 16:37:33.979882   38304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key
	I0814 16:37:33.979898   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 16:37:33.979912   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 16:37:33.979926   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 16:37:33.979942   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 16:37:33.979954   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 16:37:33.979969   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 16:37:33.979981   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 16:37:33.979992   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 16:37:33.980057   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 16:37:33.980107   38304 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 16:37:33.980119   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:37:33.980168   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:37:33.980195   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:37:33.980223   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:37:33.980266   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:37:33.980306   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:37:33.980324   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 16:37:33.980336   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 16:37:33.980895   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:37:34.006237   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:37:34.029028   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:37:34.051558   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:37:34.074765   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 16:37:34.098415   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:37:34.120496   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:37:34.143834   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:37:34.166784   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:37:34.189574   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 16:37:34.212275   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 16:37:34.234445   38304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 16:37:34.249584   38304 ssh_runner.go:195] Run: openssl version
	I0814 16:37:34.255027   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:37:34.264747   38304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:37:34.269179   38304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:37:34.269226   38304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:37:34.274326   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:37:34.282590   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 16:37:34.292091   38304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 16:37:34.296191   38304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 16:37:34.296236   38304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 16:37:34.301356   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 16:37:34.309784   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 16:37:34.319919   38304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 16:37:34.323712   38304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 16:37:34.323746   38304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 16:37:34.329232   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 16:37:34.337955   38304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:37:34.342042   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 16:37:34.349900   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 16:37:34.358163   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 16:37:34.367648   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 16:37:34.376229   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 16:37:34.384894   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 16:37:34.395015   38304 kubeadm.go:392] StartCluster: {Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.209 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
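	The StartCluster config above declares three control-plane nodes (192.168.39.4, 192.168.39.225, 192.168.39.167) plus worker m04 (192.168.39.209) behind the VIP 192.168.39.254. A quick comparison against what the cluster actually reports, assuming the ha-597780 context exists in the kubeconfig:

	    kubectl --context ha-597780 get nodes -o wide
	    minikube -p ha-597780 node list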
	I0814 16:37:34.395196   38304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 16:37:34.395253   38304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 16:37:34.470623   38304 cri.go:89] found id: "1e41add6fd3aadca92ebd87cbc3ca06c6e52b5219af598bc986a599626b3fea0"
	I0814 16:37:34.470647   38304 cri.go:89] found id: "c12bbd0fec638d4c0fa1fe3168f7f54b850d776c969bab8fb5a129fd9a1ff017"
	I0814 16:37:34.470651   38304 cri.go:89] found id: "2523827ba24c337126d2deaf39a69d56b9b5730b94440e598ae0a21caa13a627"
	I0814 16:37:34.470655   38304 cri.go:89] found id: "422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c"
	I0814 16:37:34.470658   38304 cri.go:89] found id: "e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d"
	I0814 16:37:34.470663   38304 cri.go:89] found id: "fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee"
	I0814 16:37:34.470669   38304 cri.go:89] found id: "9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1"
	I0814 16:37:34.470674   38304 cri.go:89] found id: "37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb"
	I0814 16:37:34.470679   38304 cri.go:89] found id: "f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770"
	I0814 16:37:34.470691   38304 cri.go:89] found id: "be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905"
	I0814 16:37:34.470697   38304 cri.go:89] found id: "72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd"
	I0814 16:37:34.470702   38304 cri.go:89] found id: "9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e"
	I0814 16:37:34.470708   38304 cri.go:89] found id: "4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea"
	I0814 16:37:34.470715   38304 cri.go:89] found id: ""
	I0814 16:37:34.470761   38304 ssh_runner.go:195] Run: sudo runc list -f json
	
	
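	Any of the container IDs returned by the crictl listing above can be inspected directly inside the VM; a minimal sketch using the first ID from that list:

	    minikube -p ha-597780 ssh -- sudo crictl inspect 1e41add6fd3aadca92ebd87cbc3ca06c6e52b5219af598bc986a599626b3fea0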
	==> CRI-O <==
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.596830198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59a0a655-9667-41b1-ab7d-7f65c8b4189e name=/runtime.v1.RuntimeService/Version
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.598594963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08c8bbe3-7253-461f-b2f2-de0a681d224b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.599407978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653653599372175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08c8bbe3-7253-461f-b2f2-de0a681d224b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.600048045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3600a977-50ce-44f1-aabc-334ede65d60d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.600131162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3600a977-50ce-44f1-aabc-334ede65d60d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.601702636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d7a047a63d4f358401ba14edbe7ae75853efb926363557abe896e917a35c6e1,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723653531871374687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723653502868507192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723653501865848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2443fafb925cc387eea7c3e1f71a41139be3161d3ba5fde8e40940fb2d07970b,PodSandboxId:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723653491125823576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981eb8296cdeb6d40401b0a529c6358f12551effc26a6a2c5217c4bcd27779ce,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723653490860603676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31cdbd2a724ef44a5f78908dc3852ec9665db36cf9096de1f2e03f97d304b3,PodSandboxId:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723653468195833639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea,PodSandboxId:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723653458910000622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f,PodSandboxId:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723653457889718585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e4820d9
35853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7,PodSandboxId:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653457839833373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723653457705328500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e,PodSandboxId:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723653457660851153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f,PodSandboxId:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723653457646958209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723653457539829908,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579,PodSandboxId:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653454561571505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723652958530125849,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778224096082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778200790933,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723652766365339973,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723652762447302359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723652750804163125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723652750785390188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3600a977-50ce-44f1-aabc-334ede65d60d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.644768093Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3872519a-3a53-4b67-9eb2-64b65712968f name=/runtime.v1.RuntimeService/Version
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.644844846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3872519a-3a53-4b67-9eb2-64b65712968f name=/runtime.v1.RuntimeService/Version
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.645904865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f2d0cb3-6e84-407c-b4ee-6f867208a1fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.646503810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653653646475963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f2d0cb3-6e84-407c-b4ee-6f867208a1fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.647093824Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25d9ca55-f23f-440b-863c-fda2955a20ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.647173606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25d9ca55-f23f-440b-863c-fda2955a20ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.647773723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d7a047a63d4f358401ba14edbe7ae75853efb926363557abe896e917a35c6e1,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723653531871374687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723653502868507192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723653501865848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2443fafb925cc387eea7c3e1f71a41139be3161d3ba5fde8e40940fb2d07970b,PodSandboxId:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723653491125823576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981eb8296cdeb6d40401b0a529c6358f12551effc26a6a2c5217c4bcd27779ce,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723653490860603676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31cdbd2a724ef44a5f78908dc3852ec9665db36cf9096de1f2e03f97d304b3,PodSandboxId:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723653468195833639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea,PodSandboxId:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723653458910000622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f,PodSandboxId:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723653457889718585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e4820d9
35853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7,PodSandboxId:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653457839833373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723653457705328500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e,PodSandboxId:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723653457660851153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f,PodSandboxId:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723653457646958209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723653457539829908,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579,PodSandboxId:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653454561571505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723652958530125849,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778224096082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778200790933,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723652766365339973,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723652762447302359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723652750804163125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723652750785390188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25d9ca55-f23f-440b-863c-fda2955a20ef name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.688819379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37405cf5-6d4d-44bd-9c73-9c7c2c3a60ee name=/runtime.v1.RuntimeService/Version
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.688904214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37405cf5-6d4d-44bd-9c73-9c7c2c3a60ee name=/runtime.v1.RuntimeService/Version
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.690154184Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96db2ceb-602c-4434-80ae-a611461d0165 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.690688252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653653690663697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96db2ceb-602c-4434-80ae-a611461d0165 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.691422484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=297a67a7-3f5d-4c95-b05c-35e9a636c7fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.691481591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=297a67a7-3f5d-4c95-b05c-35e9a636c7fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.691905804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d7a047a63d4f358401ba14edbe7ae75853efb926363557abe896e917a35c6e1,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723653531871374687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723653502868507192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723653501865848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2443fafb925cc387eea7c3e1f71a41139be3161d3ba5fde8e40940fb2d07970b,PodSandboxId:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723653491125823576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981eb8296cdeb6d40401b0a529c6358f12551effc26a6a2c5217c4bcd27779ce,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723653490860603676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31cdbd2a724ef44a5f78908dc3852ec9665db36cf9096de1f2e03f97d304b3,PodSandboxId:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723653468195833639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea,PodSandboxId:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723653458910000622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f,PodSandboxId:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723653457889718585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e4820d9
35853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7,PodSandboxId:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653457839833373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723653457705328500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e,PodSandboxId:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723653457660851153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f,PodSandboxId:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723653457646958209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723653457539829908,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579,PodSandboxId:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653454561571505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723652958530125849,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778224096082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778200790933,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723652766365339973,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723652762447302359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723652750804163125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723652750785390188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=297a67a7-3f5d-4c95-b05c-35e9a636c7fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.851093936Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a1bfddb-1638-4c42-a2d7-eae139140124 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.851434824Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-rq7wd,Uid:1cd22b55-7981-4a29-8365-557fc17a8ae1,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653491016812684,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T16:29:14.551104869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-597780,Uid:c6eda7162bf969e95f0578138dd8c6ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1723653468112083216,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{kubernetes.io/config.hash: c6eda7162bf969e95f0578138dd8c6ad,kubernetes.io/config.seen: 2024-08-14T16:37:33.524320993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-28k2m,Uid:ec3725c1-3e21-49b0-9caf-922ef1928ed8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653457304596963,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08
-14T16:26:17.629268633Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&PodSandboxMetadata{Name:kube-proxy-79txl,Uid:ea48ab09-60d5-4133-accc-f3fd69a50c5d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653457263580665,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T16:26:01.364825424Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-597780,Uid:f561a4998ad7d50b7600c5793dffc8dc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:17236534572622
02189,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f561a4998ad7d50b7600c5793dffc8dc,kubernetes.io/config.seen: 2024-08-14T16:25:59.801989001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-597780,Uid:f9d9336ca03d755bb866a3122f131c5c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653457261931899,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,tier: control-plane,},Annotations:map[string]string{ku
beadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.4:8443,kubernetes.io/config.hash: f9d9336ca03d755bb866a3122f131c5c,kubernetes.io/config.seen: 2024-08-14T16:25:59.801987597Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&PodSandboxMetadata{Name:etcd-ha-597780,Uid:73a9cba43895665a491de601c899e0bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653457258004781,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.4:2379,kubernetes.io/config.hash: 73a9cba43895665a491de601c899e0bc,kubernetes.io/config.seen: 2024-08-14T16:25:59.801984128Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbo
x{Id:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&PodSandboxMetadata{Name:kindnet-zm75h,Uid:1e5eabaf-5973-4658-b12b-f7faf67b8af7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653457257476966,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T16:26:01.377440931Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-597780,Uid:557e39ea39f4993c51b28b9eeb9a1dd9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653457254664318,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.
name: POD,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 557e39ea39f4993c51b28b9eeb9a1dd9,kubernetes.io/config.seen: 2024-08-14T16:25:59.801990379Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9939439d-cddd-4505-b554-b72f749269fd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653457254663688,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"
v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-14T16:26:17.636848775Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-kc84b,Uid:3a483f17-cab5-4090-abc6-808d84397a8a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723653454402316687,Label
s:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T16:26:17.635333139Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5a1bfddb-1638-4c42-a2d7-eae139140124 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.852595076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b16c2e12-85a3-4af5-a6c5-48ee97980614 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.852677218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b16c2e12-85a3-4af5-a6c5-48ee97980614 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:40:53 ha-597780 crio[3567]: time="2024-08-14 16:40:53.853672302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d7a047a63d4f358401ba14edbe7ae75853efb926363557abe896e917a35c6e1,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723653531871374687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723653502868507192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723653501865848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2443fafb925cc387eea7c3e1f71a41139be3161d3ba5fde8e40940fb2d07970b,PodSandboxId:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723653491125823576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31cdbd2a724ef44a5f78908dc3852ec9665db36cf9096de1f2e03f97d304b3,PodSandboxId:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723653468195833639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea,PodSandboxId:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723653458910000622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f,PodSandboxId:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723653457889718585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:96e4820d935853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7,PodSandboxId:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653457839833373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e,PodSandboxId:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723653457660851153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f,PodSandboxId:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723653457646958209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579,PodSandboxId:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653454561571505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\
":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b16c2e12-85a3-4af5-a6c5-48ee97980614 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d7a047a63d4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       4                   352ccf859fcf6       storage-provisioner
	0b0090111a907       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago       Running             kube-apiserver            3                   26c626804c784       kube-apiserver-ha-597780
	0feebc4c91acc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago       Running             kube-controller-manager   2                   c127b102483e0       kube-controller-manager-ha-597780
	2443fafb925cc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago       Running             busybox                   1                   e2479ec996bb1       busybox-7dff88458-rq7wd
	981eb8296cdeb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Exited              storage-provisioner       3                   352ccf859fcf6       storage-provisioner
	1d31cdbd2a724       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      3 minutes ago       Running             kube-vip                  0                   69b675c5debda       kube-vip-ha-597780
	bd4f3f03c5946       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      3 minutes ago       Running             kube-proxy                1                   d58a265d2473c       kube-proxy-79txl
	71c507d68d37b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago       Running             kindnet-cni               1                   fd01497642c1d       kindnet-zm75h
	96e4820d93585       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   749b6336be4d8       coredns-6f6b679f8f-28k2m
	047dd2746b2ff       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      3 minutes ago       Exited              kube-controller-manager   1                   c127b102483e0       kube-controller-manager-ha-597780
	78d453751eb78       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago       Running             etcd                      1                   14b128d6cb502       etcd-ha-597780
	804f825214568       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      3 minutes ago       Running             kube-scheduler            1                   6e9c89800b459       kube-scheduler-ha-597780
	bd1bda5de444e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      3 minutes ago       Exited              kube-apiserver            2                   26c626804c784       kube-apiserver-ha-597780
	f031e182fbc1d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   9c9eb56944555       coredns-6f6b679f8f-kc84b
	e27a742b157d3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Exited              busybox                   0                   24fc5367bc64f       busybox-7dff88458-rq7wd
	422bd8a4c6f73       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago      Exited              coredns                   0                   103da86315438       coredns-6f6b679f8f-28k2m
	e6f5722727045       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago      Exited              coredns                   0                   6b4d32c83825a       coredns-6f6b679f8f-kc84b
	9383508aacb47       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    14 minutes ago      Exited              kindnet-cni               0                   7c496d8d976b0       kindnet-zm75h
	37ced76497679       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      14 minutes ago      Exited              kube-proxy                0                   403a7dadd2cf1       kube-proxy-79txl
	be37bacc58210       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   c3627f4eb5471       etcd-ha-597780
	9049789221ccd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      15 minutes ago      Exited              kube-scheduler            0                   dfba8d4d791ac       kube-scheduler-ha-597780
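	The container status table above is the runtime's answer to the same /runtime.v1.RuntimeService/ListContainers call that appears in the CRI-O interceptor log higher up. A rough, hypothetical sketch of issuing that call directly over the CRI gRPC socket (the socket path matches the cri-socket annotation on the nodes below; everything else here is an assumption, not part of the minikube test suite):

	// Hypothetical sketch, not part of minikube's test harness: list containers
	// over the CRI API, roughly what `crictl ps -a` and the table above show.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path (matches the node annotation in this report).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// An empty filter returns every container the runtime knows about,
		// including exited ones, as in the table above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s  attempt %d\n",
				c.Id[:13], c.State, c.Metadata.Name, c.Metadata.Attempt)
		}
	}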
	
	
	==> coredns [422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c] <==
	[INFO] 10.244.2.2:36168 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009915s
	[INFO] 10.244.0.4:54131 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070841s
	[INFO] 10.244.0.4:55620 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091367s
	[INFO] 10.244.0.4:43235 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075669s
	[INFO] 10.244.1.2:41689 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119685s
	[INFO] 10.244.1.2:59902 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124326s
	[INFO] 10.244.2.2:40926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109376s
	[INFO] 10.244.2.2:51410 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177337s
	[INFO] 10.244.0.4:34296 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121681s
	[INFO] 10.244.1.2:46660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107008s
	[INFO] 10.244.1.2:58922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127256s
	[INFO] 10.244.1.2:50299 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110499s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1949&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1949&timeout=6m49s&timeoutSeconds=409&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	
	
	==> coredns [96e4820d935853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d] <==
	[INFO] 10.244.2.2:34873 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084958s
	[INFO] 10.244.0.4:38163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100276s
	[INFO] 10.244.0.4:57638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133846s
	[INFO] 10.244.0.4:41879 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064694s
	[INFO] 10.244.1.2:53124 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175486s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1949&timeout=7m31s&timeoutSeconds=451&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1946&timeout=8m51s&timeoutSeconds=531&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1949&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1946": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1946": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[260799391]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:35:46.732) (total time: 12311ms):
	Trace[260799391]: ---"Objects listed" error:Unauthorized 12311ms (16:35:59.044)
	Trace[260799391]: [12.311906254s] [12.311906254s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[238329841]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:37:45.083) (total time: 10002ms):
	Trace[238329841]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:37:55.084)
	Trace[238329841]: [10.002209092s] [10.002209092s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55108->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1162545651]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:37:51.873) (total time: 10440ms):
	Trace[1162545651]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55108->10.96.0.1:443: read: connection reset by peer 10439ms (16:38:02.313)
	Trace[1162545651]: [10.440179041s] [10.440179041s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55108->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
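	The repeated "failed to list" / "Failed to watch" lines in all three coredns logs are emitted by the kubernetes plugin's client-go reflectors, which list and watch Services, Namespaces and EndpointSlices through the in-cluster VIP https://10.96.0.1:443; while the apiserver behind that VIP is unreachable the initial List never succeeds, so the ready plugin keeps printing Still waiting on: "kubernetes". A minimal sketch of the same list/watch pattern using a shared informer (illustrative only, assuming in-cluster config; this is not the CoreDNS implementation):

	// Minimal, hypothetical list/watch sketch; names and resync period are assumptions.
	package main

	import (
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // resolves to https://10.96.0.1:443 inside a pod
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()
		svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				fmt.Println("service added:", obj.(*v1.Service).Name)
			},
		})

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
		// WaitForCacheSync blocks on the same initial List calls that fail in the
		// log above while the apiserver VIP is unreachable.
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			panic("timed out waiting for service cache to sync")
		}
		select {}
	}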
	
	
	==> describe nodes <==
	Name:               ha-597780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_26_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:25:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:40:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:38:25 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:38:25 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:38:25 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:38:25 +0000   Wed, 14 Aug 2024 16:26:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-597780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 380f2e1fef9b4a7ba6d1d939cb1bae1a
	  System UUID:                380f2e1f-ef9b-4a7b-a6d1-d939cb1bae1a
	  Boot ID:                    aa55ed43-2220-4096-a571-51cd5b70ed86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rq7wd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-28k2m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-kc84b             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-597780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-zm75h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-597780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-597780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-79txl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-597780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-597780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 2m29s                 kube-proxy       
	  Normal   Starting                 14m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  14m                   kubelet          Node ha-597780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                   kubelet          Node ha-597780 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                   kubelet          Node ha-597780 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           14m                   node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal   NodeReady                14m                   kubelet          Node ha-597780 status is now: NodeReady
	  Normal   RegisteredNode           13m                   node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal   RegisteredNode           12m                   node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Warning  ContainerGCFailed        3m55s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m42s (x2 over 4m7s)  kubelet          Node ha-597780 status is now: NodeNotReady
	  Normal   RegisteredNode           2m33s                 node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal   RegisteredNode           2m26s                 node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal   RegisteredNode           36s                   node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	
	
	Name:               ha-597780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_27_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:27:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:40:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:39:36 +0000   Wed, 14 Aug 2024 16:38:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:39:36 +0000   Wed, 14 Aug 2024 16:38:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:39:36 +0000   Wed, 14 Aug 2024 16:38:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:39:36 +0000   Wed, 14 Aug 2024 16:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-597780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a36bc81f5b549f48c64d8093b0c45f0
	  System UUID:                2a36bc81-f5b5-49f4-8c64-d8093b0c45f0
	  Boot ID:                    40b81862-df95-474f-9bec-f0356bc47e40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9lh2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-597780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-c8f8r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-597780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-597780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4q2dq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-597780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-597780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m2s                   kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-597780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-597780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-597780-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  NodeNotReady             9m45s                  node-controller  Node ha-597780-m02 status is now: NodeNotReady
	  Normal  Starting                 2m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s (x8 over 2m58s)  kubelet          Node ha-597780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x8 over 2m58s)  kubelet          Node ha-597780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x7 over 2m58s)  kubelet          Node ha-597780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m33s                  node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           2m26s                  node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           36s                    node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	
	
	Name:               ha-597780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_28_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:28:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:40:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:40:32 +0000   Wed, 14 Aug 2024 16:40:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:40:32 +0000   Wed, 14 Aug 2024 16:40:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:40:32 +0000   Wed, 14 Aug 2024 16:40:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:40:32 +0000   Wed, 14 Aug 2024 16:40:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    ha-597780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6ad778cd276b4853bc1e6d49295cbd2e
	  System UUID:                6ad778cd-276b-4853-bc1e-6d49295cbd2e
	  Boot ID:                    09f46f81-2efc-4069-8343-2353a04fd797
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-27k42                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-597780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-2p7zj                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-597780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-597780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-97tjj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-597780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-597780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-597780-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-597780-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-597780-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal   RegisteredNode           2m33s              node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal   RegisteredNode           2m26s              node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	  Normal   NodeNotReady             113s               node-controller  Node ha-597780-m03 status is now: NodeNotReady
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 53s (x2 over 53s)  kubelet          Node ha-597780-m03 has been rebooted, boot id: 09f46f81-2efc-4069-8343-2353a04fd797
	  Normal   NodeHasSufficientMemory  53s (x3 over 53s)  kubelet          Node ha-597780-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s (x3 over 53s)  kubelet          Node ha-597780-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s (x3 over 53s)  kubelet          Node ha-597780-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             53s                kubelet          Node ha-597780-m03 status is now: NodeNotReady
	  Normal   NodeReady                53s                kubelet          Node ha-597780-m03 status is now: NodeReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-597780-m03 event: Registered Node ha-597780-m03 in Controller
	
	
	Name:               ha-597780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_29_55_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:29:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:40:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:40:46 +0000   Wed, 14 Aug 2024 16:40:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:40:46 +0000   Wed, 14 Aug 2024 16:40:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:40:46 +0000   Wed, 14 Aug 2024 16:40:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:40:46 +0000   Wed, 14 Aug 2024 16:40:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    ha-597780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fa932f445844ff7a66a64ac6cdf169b
	  System UUID:                0fa932f4-4584-4ff7-a66a-64ac6cdf169b
	  Boot ID:                    b6117a86-4071-4f3c-880b-c8232cde1ee3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5x5s7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-bmf62    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-597780-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-597780-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-597780-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           10m                node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   RegisteredNode           2m33s              node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   RegisteredNode           2m26s              node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   NodeNotReady             113s               node-controller  Node ha-597780-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                 kubelet          Node ha-597780-m04 has been rebooted, boot id: b6117a86-4071-4f3c-880b-c8232cde1ee3
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-597780-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-597780-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-597780-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                 kubelet          Node ha-597780-m04 status is now: NodeReady
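	All four node descriptions above end with Ready=True; the Rebooted and NodeNotReady events on ha-597780-m03 and ha-597780-m04 record the restart that preceded it. As a hypothetical illustration only (not part of the test harness), the same per-node conditions can be read with client-go:

	// Hypothetical helper: print the Ready/MemoryPressure/DiskPressure/PIDPressure
	// conditions shown in the "describe nodes" dump above. Uses the default
	// ~/.kube/config location; adjust as needed.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
			}
		}
	}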
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.613825] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.065926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069239] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.173403] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.130531] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.250569] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +3.824868] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +3.756438] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.057963] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.054111] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.086455] kauditd_printk_skb: 79 callbacks suppressed
	[Aug14 16:26] kauditd_printk_skb: 62 callbacks suppressed
	[Aug14 16:27] kauditd_printk_skb: 26 callbacks suppressed
	[Aug14 16:37] systemd-fstab-generator[3485]: Ignoring "noauto" option for root device
	[  +0.144486] systemd-fstab-generator[3497]: Ignoring "noauto" option for root device
	[  +0.169121] systemd-fstab-generator[3511]: Ignoring "noauto" option for root device
	[  +0.133515] systemd-fstab-generator[3523]: Ignoring "noauto" option for root device
	[  +0.275861] systemd-fstab-generator[3552]: Ignoring "noauto" option for root device
	[  +0.759588] systemd-fstab-generator[3654]: Ignoring "noauto" option for root device
	[  +3.681325] kauditd_printk_skb: 132 callbacks suppressed
	[ +10.900447] kauditd_printk_skb: 88 callbacks suppressed
	[Aug14 16:38] kauditd_printk_skb: 6 callbacks suppressed
	[ +14.179034] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e] <==
	{"level":"warn","ts":"2024-08-14T16:39:55.826965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:39:55.918985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7ab0973fa604e492","from":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-14T16:39:56.773965Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.167:2380/version","remote-member-id":"b8cd3528b7e3c388","error":"Get \"https://192.168.39.167:2380/version\": dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:39:56.774089Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b8cd3528b7e3c388","error":"Get \"https://192.168.39.167:2380/version\": dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:39:58.629473Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:39:58.629542Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:00.776555Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.167:2380/version","remote-member-id":"b8cd3528b7e3c388","error":"Get \"https://192.168.39.167:2380/version\": dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:00.776770Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b8cd3528b7e3c388","error":"Get \"https://192.168.39.167:2380/version\": dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:03.630469Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:03.630637Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:04.778730Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.167:2380/version","remote-member-id":"b8cd3528b7e3c388","error":"Get \"https://192.168.39.167:2380/version\": dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:04.778782Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b8cd3528b7e3c388","error":"Get \"https://192.168.39.167:2380/version\": dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:08.630711Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:08.630797Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:08.780365Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.167:2380/version","remote-member-id":"b8cd3528b7e3c388","error":"Get \"https://192.168.39.167:2380/version\": dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:08.780491Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b8cd3528b7e3c388","error":"Get \"https://192.168.39.167:2380/version\": dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-14T16:40:11.230331Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:11.230498Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:11.232813Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:11.255487Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7ab0973fa604e492","to":"b8cd3528b7e3c388","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-14T16:40:11.255602Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:11.256282Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7ab0973fa604e492","to":"b8cd3528b7e3c388","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-14T16:40:11.256366Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"warn","ts":"2024-08-14T16:40:13.631099Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:13.631175Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	
	
	==> etcd [be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905] <==
	{"level":"warn","ts":"2024-08-14T16:36:00.807914Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T16:36:00.012535Z","time spent":"795.373209ms","remote":"127.0.0.1:36734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:500 "}
	2024/08/14 16:36:00 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-14T16:36:00.860655Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T16:36:00.860709Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T16:36:00.862172Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7ab0973fa604e492","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-14T16:36:00.862375Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862408Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862434Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862528Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862605Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862664Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862695Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862719Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.862767Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.862818Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.862908Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.862966Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.863031Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.863066Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.866691Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"warn","ts":"2024-08-14T16:36:00.866777Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.84983465s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-14T16:36:00.866827Z","caller":"traceutil/trace.go:171","msg":"trace[1941826792] range","detail":"{range_begin:; range_end:; }","duration":"8.849910331s","start":"2024-08-14T16:35:52.016908Z","end":"2024-08-14T16:36:00.866818Z","steps":["trace[1941826792] 'agreement among raft nodes before linearized reading'  (duration: 8.849832588s)"],"step_count":1}
	{"level":"error","ts":"2024-08-14T16:36:00.866862Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-14T16:36:00.866938Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-08-14T16:36:00.866971Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-597780","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"]}
	
	
	==> kernel <==
	 16:40:54 up 15 min,  0 users,  load average: 0.50, 0.41, 0.29
	Linux ha-597780 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f] <==
	I0814 16:40:18.871555       1 main.go:299] handling current node
	I0814 16:40:28.874641       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:40:28.874739       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:40:28.874918       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:40:28.874939       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:40:28.874986       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:40:28.875004       1 main.go:299] handling current node
	I0814 16:40:28.875029       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:40:28.875044       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:40:38.870981       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:40:38.871043       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:40:38.871187       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:40:38.871254       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:40:38.871338       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:40:38.871358       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:40:38.871412       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:40:38.871427       1 main.go:299] handling current node
	I0814 16:40:48.871462       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:40:48.871572       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:40:48.871748       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:40:48.871773       1 main.go:299] handling current node
	I0814 16:40:48.871795       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:40:48.871811       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:40:48.871870       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:40:48.871888       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1] <==
	I0814 16:35:37.358034       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:35:37.358193       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:35:37.358403       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:35:37.358434       1 main.go:299] handling current node
	I0814 16:35:37.358478       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:35:37.358495       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:35:37.358573       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:35:37.358602       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	E0814 16:35:44.073856       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1946&timeout=5m12s&timeoutSeconds=312&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0814 16:35:47.360336       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:35:47.360394       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:35:47.360598       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:35:47.360620       1 main.go:299] handling current node
	I0814 16:35:47.360632       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:35:47.360637       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:35:47.360690       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:35:47.360706       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:35:57.358160       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:35:57.358253       1 main.go:299] handling current node
	I0814 16:35:57.358272       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:35:57.358278       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:35:57.358453       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:35:57.358471       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:35:57.358523       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:35:57.358540       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047] <==
	I0814 16:38:24.627267       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0814 16:38:24.715866       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 16:38:24.722027       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 16:38:24.722809       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 16:38:24.722976       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 16:38:24.723003       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0814 16:38:24.723079       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 16:38:24.723705       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0814 16:38:24.723106       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 16:38:24.726296       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 16:38:24.727501       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0814 16:38:24.727563       1 aggregator.go:171] initial CRD sync complete...
	I0814 16:38:24.727600       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 16:38:24.727623       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 16:38:24.727645       1 cache.go:39] Caches are synced for autoregister controller
	W0814 16:38:24.734580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.167 192.168.39.225]
	I0814 16:38:24.736082       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 16:38:24.736141       1 policy_source.go:224] refreshing policies
	I0814 16:38:24.804969       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 16:38:24.836589       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 16:38:24.844493       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0814 16:38:24.847453       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0814 16:38:25.622949       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0814 16:38:26.066028       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.167 192.168.39.225 192.168.39.4]
	W0814 16:38:36.061733       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.225 192.168.39.4]
	
	
	==> kube-apiserver [bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0] <==
	I0814 16:37:37.970015       1 options.go:228] external host was not specified, using 192.168.39.4
	I0814 16:37:38.025642       1 server.go:142] Version: v1.31.0
	I0814 16:37:38.025692       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:37:38.820475       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0814 16:37:38.834424       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 16:37:38.845107       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0814 16:37:38.845262       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0814 16:37:38.845923       1 instance.go:232] Using reconciler: lease
	W0814 16:37:58.820468       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0814 16:37:58.820640       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0814 16:37:58.848815       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89] <==
	I0814 16:37:38.899600       1 serving.go:386] Generated self-signed cert in-memory
	I0814 16:37:39.390494       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0814 16:37:39.390528       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:37:39.392316       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0814 16:37:39.392452       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0814 16:37:39.392944       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 16:37:39.393012       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0814 16:37:59.855358       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.4:8443/healthz\": dial tcp 192.168.39.4:8443: connect: connection refused"
	
	
	==> kube-controller-manager [0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1] <==
	I0814 16:39:01.038713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:39:01.050109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m03"
	I0814 16:39:01.087385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.856241ms"
	I0814 16:39:01.087701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="184.816µs"
	I0814 16:39:03.226585       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m03"
	I0814 16:39:06.087394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	I0814 16:39:06.275864       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m03"
	I0814 16:39:13.304154       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:39:16.356744       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:39:27.973956       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="19.959273ms"
	I0814 16:39:27.974293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="96.387µs"
	I0814 16:39:36.829381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m02"
	I0814 16:40:01.538661       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m03"
	I0814 16:40:01.563961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m03"
	I0814 16:40:02.491778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.505µs"
	I0814 16:40:03.233540       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m03"
	I0814 16:40:18.277737       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:40:18.343148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:40:21.839748       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.040665ms"
	I0814 16:40:21.839953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.772µs"
	I0814 16:40:32.268989       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m03"
	I0814 16:40:46.539697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:40:46.540169       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-597780-m04"
	I0814 16:40:46.557910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:40:48.253568       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	
	
	==> kube-proxy [37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb] <==
	E0814 16:34:49.930498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:49.930596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:49.930810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:49.932409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:49.932738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:56.073720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:56.074147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:56.073984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:56.074379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:56.074051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:56.074488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:05.289601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:05.289782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:05.289656       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:05.289953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:08.360934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:08.361036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:26.794516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:26.794584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:26.794668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:26.794698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:26.794803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:26.794910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:57.514049       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:57.514432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:37:41.960693       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:37:45.034507       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:37:48.104646       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:37:54.249582       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:38:03.465892       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:38:24.968712       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0814 16:38:24.968820       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0814 16:38:24.968922       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:38:25.017893       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:38:25.018012       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:38:25.018055       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:38:25.021020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:38:25.021559       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:38:25.022269       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:38:25.025576       1 config.go:197] "Starting service config controller"
	I0814 16:38:25.025660       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:38:25.025805       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:38:25.025826       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:38:25.026869       1 config.go:326] "Starting node config controller"
	I0814 16:38:25.026892       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:38:25.126564       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:38:25.126777       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:38:25.127915       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f] <==
	W0814 16:38:16.332740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.4:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:16.332804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.4:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:16.552969       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.4:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:16.553039       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.4:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:16.685743       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.4:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:16.685914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.4:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:16.841534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.4:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:16.841641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.4:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:17.221011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:17.221115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:18.663352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:18.663469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:19.791843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:19.791894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:19.956861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.4:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:19.956984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.4:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:20.271610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.4:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:20.271672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.4:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:21.087552       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.4:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:21.087676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.4:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:21.190590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.4:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:21.190660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.4:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:21.931308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.4:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:21.931369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.4:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	I0814 16:38:33.364665       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e] <==
	E0814 16:29:14.513586       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d61c6e28-3a9c-47b5-ad97-6d1c77c30857(default/busybox-7dff88458-w9lh2) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w9lh2"
	E0814 16:29:14.513669       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9lh2\": pod busybox-7dff88458-w9lh2 is already assigned to node \"ha-597780-m02\"" pod="default/busybox-7dff88458-w9lh2"
	I0814 16:29:14.513886       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w9lh2" node="ha-597780-m02"
	E0814 16:29:14.544849       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-27k42\": pod busybox-7dff88458-27k42 is already assigned to node \"ha-597780-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-27k42" node="ha-597780-m03"
	E0814 16:29:14.544959       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-27k42\": pod busybox-7dff88458-27k42 is already assigned to node \"ha-597780-m03\"" pod="default/busybox-7dff88458-27k42"
	E0814 16:29:14.545719       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rq7wd\": pod busybox-7dff88458-rq7wd is already assigned to node \"ha-597780\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rq7wd" node="ha-597780"
	E0814 16:29:14.557325       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rq7wd\": pod busybox-7dff88458-rq7wd is already assigned to node \"ha-597780\"" pod="default/busybox-7dff88458-rq7wd"
	E0814 16:29:54.657005       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5x5s7\": pod kindnet-5x5s7 is already assigned to node \"ha-597780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5x5s7" node="ha-597780-m04"
	E0814 16:29:54.657112       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 45af1890-2443-48af-a4f1-38ce0ab0f558(kube-system/kindnet-5x5s7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5x5s7"
	E0814 16:29:54.657139       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5x5s7\": pod kindnet-5x5s7 is already assigned to node \"ha-597780-m04\"" pod="kube-system/kindnet-5x5s7"
	I0814 16:29:54.657164       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5x5s7" node="ha-597780-m04"
	E0814 16:35:52.111972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0814 16:35:52.252623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0814 16:35:53.591543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0814 16:35:54.659904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0814 16:35:55.135355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0814 16:35:55.551945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0814 16:35:56.194429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0814 16:35:56.452057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0814 16:35:57.575003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0814 16:35:57.601780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0814 16:35:57.652631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0814 16:35:59.238461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0814 16:36:00.196272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0814 16:36:00.771517       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 14 16:39:20 ha-597780 kubelet[1315]: E0814 16:39:20.082358    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653560081995379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:39:20 ha-597780 kubelet[1315]: E0814 16:39:20.082452    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653560081995379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:39:30 ha-597780 kubelet[1315]: E0814 16:39:30.083511    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653570083285590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:39:30 ha-597780 kubelet[1315]: E0814 16:39:30.083545    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653570083285590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:39:40 ha-597780 kubelet[1315]: E0814 16:39:40.086108    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653580085750377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:39:40 ha-597780 kubelet[1315]: E0814 16:39:40.086142    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653580085750377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:39:50 ha-597780 kubelet[1315]: E0814 16:39:50.089353    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653590088883428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:39:50 ha-597780 kubelet[1315]: E0814 16:39:50.089384    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653590088883428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:39:59 ha-597780 kubelet[1315]: E0814 16:39:59.873100    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 16:39:59 ha-597780 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 16:39:59 ha-597780 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 16:39:59 ha-597780 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 16:39:59 ha-597780 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 16:40:00 ha-597780 kubelet[1315]: E0814 16:40:00.092025    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653600091612486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:00 ha-597780 kubelet[1315]: E0814 16:40:00.092153    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653600091612486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:10 ha-597780 kubelet[1315]: E0814 16:40:10.093779    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653610093413090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:10 ha-597780 kubelet[1315]: E0814 16:40:10.093869    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653610093413090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:20 ha-597780 kubelet[1315]: E0814 16:40:20.095271    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653620094870777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:20 ha-597780 kubelet[1315]: E0814 16:40:20.095351    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653620094870777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:30 ha-597780 kubelet[1315]: E0814 16:40:30.097577    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653630096156391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:30 ha-597780 kubelet[1315]: E0814 16:40:30.097628    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653630096156391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:40 ha-597780 kubelet[1315]: E0814 16:40:40.100381    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653640099692100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:40 ha-597780 kubelet[1315]: E0814 16:40:40.100417    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653640099692100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:50 ha-597780 kubelet[1315]: E0814 16:40:50.102715    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653650102274115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:40:50 ha-597780 kubelet[1315]: E0814 16:40:50.102754    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653650102274115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 16:40:53.228796   39883 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19446-13977/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-597780 -n ha-597780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-597780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (417.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 stop -v=7 --alsologtostderr
E0814 16:43:02.588787   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 stop -v=7 --alsologtostderr: exit status 82 (2m0.470264405s)

                                                
                                                
-- stdout --
	* Stopping node "ha-597780-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:41:12.307917   40294 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:41:12.308144   40294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:41:12.308151   40294 out.go:304] Setting ErrFile to fd 2...
	I0814 16:41:12.308155   40294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:41:12.308324   40294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:41:12.308527   40294 out.go:298] Setting JSON to false
	I0814 16:41:12.308599   40294 mustload.go:65] Loading cluster: ha-597780
	I0814 16:41:12.308927   40294 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:41:12.309015   40294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:41:12.309198   40294 mustload.go:65] Loading cluster: ha-597780
	I0814 16:41:12.309330   40294 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:41:12.309351   40294 stop.go:39] StopHost: ha-597780-m04
	I0814 16:41:12.309714   40294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:41:12.309755   40294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:41:12.324528   40294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
	I0814 16:41:12.324973   40294 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:41:12.325610   40294 main.go:141] libmachine: Using API Version  1
	I0814 16:41:12.325648   40294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:41:12.325998   40294 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:41:12.328346   40294 out.go:177] * Stopping node "ha-597780-m04"  ...
	I0814 16:41:12.329568   40294 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 16:41:12.329605   40294 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:41:12.329797   40294 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 16:41:12.329822   40294 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:41:12.332649   40294 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:41:12.333109   40294 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:40:41 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:41:12.333140   40294 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:41:12.333324   40294 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:41:12.333506   40294 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:41:12.333677   40294 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:41:12.333839   40294 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	I0814 16:41:12.417781   40294 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 16:41:12.469920   40294 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 16:41:12.521404   40294 main.go:141] libmachine: Stopping "ha-597780-m04"...
	I0814 16:41:12.521436   40294 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:41:12.523024   40294 main.go:141] libmachine: (ha-597780-m04) Calling .Stop
	I0814 16:41:12.526877   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 0/120
	I0814 16:41:13.528234   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 1/120
	I0814 16:41:14.530112   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 2/120
	I0814 16:41:15.531765   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 3/120
	I0814 16:41:16.533232   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 4/120
	I0814 16:41:17.535660   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 5/120
	I0814 16:41:18.538270   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 6/120
	I0814 16:41:19.539854   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 7/120
	I0814 16:41:20.541816   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 8/120
	I0814 16:41:21.544296   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 9/120
	I0814 16:41:22.547060   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 10/120
	I0814 16:41:23.548446   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 11/120
	I0814 16:41:24.549682   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 12/120
	I0814 16:41:25.551422   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 13/120
	I0814 16:41:26.552917   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 14/120
	I0814 16:41:27.554650   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 15/120
	I0814 16:41:28.556645   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 16/120
	I0814 16:41:29.558182   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 17/120
	I0814 16:41:30.559728   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 18/120
	I0814 16:41:31.561078   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 19/120
	I0814 16:41:32.562431   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 20/120
	I0814 16:41:33.563826   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 21/120
	I0814 16:41:34.565866   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 22/120
	I0814 16:41:35.567318   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 23/120
	I0814 16:41:36.568796   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 24/120
	I0814 16:41:37.570818   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 25/120
	I0814 16:41:38.572092   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 26/120
	I0814 16:41:39.573700   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 27/120
	I0814 16:41:40.574944   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 28/120
	I0814 16:41:41.576303   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 29/120
	I0814 16:41:42.577816   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 30/120
	I0814 16:41:43.579130   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 31/120
	I0814 16:41:44.580544   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 32/120
	I0814 16:41:45.581962   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 33/120
	I0814 16:41:46.583247   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 34/120
	I0814 16:41:47.584513   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 35/120
	I0814 16:41:48.586117   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 36/120
	I0814 16:41:49.587727   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 37/120
	I0814 16:41:50.589921   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 38/120
	I0814 16:41:51.591346   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 39/120
	I0814 16:41:52.593179   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 40/120
	I0814 16:41:53.594465   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 41/120
	I0814 16:41:54.595945   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 42/120
	I0814 16:41:55.597799   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 43/120
	I0814 16:41:56.599115   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 44/120
	I0814 16:41:57.601349   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 45/120
	I0814 16:41:58.602648   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 46/120
	I0814 16:41:59.604424   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 47/120
	I0814 16:42:00.605851   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 48/120
	I0814 16:42:01.607601   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 49/120
	I0814 16:42:02.609266   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 50/120
	I0814 16:42:03.610848   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 51/120
	I0814 16:42:04.612204   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 52/120
	I0814 16:42:05.613820   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 53/120
	I0814 16:42:06.615194   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 54/120
	I0814 16:42:07.617189   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 55/120
	I0814 16:42:08.618787   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 56/120
	I0814 16:42:09.620183   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 57/120
	I0814 16:42:10.621907   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 58/120
	I0814 16:42:11.623250   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 59/120
	I0814 16:42:12.625419   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 60/120
	I0814 16:42:13.626976   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 61/120
	I0814 16:42:14.628403   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 62/120
	I0814 16:42:15.629830   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 63/120
	I0814 16:42:16.631391   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 64/120
	I0814 16:42:17.633531   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 65/120
	I0814 16:42:18.635476   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 66/120
	I0814 16:42:19.637782   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 67/120
	I0814 16:42:20.639381   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 68/120
	I0814 16:42:21.641451   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 69/120
	I0814 16:42:22.643758   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 70/120
	I0814 16:42:23.645477   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 71/120
	I0814 16:42:24.646993   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 72/120
	I0814 16:42:25.648586   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 73/120
	I0814 16:42:26.650034   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 74/120
	I0814 16:42:27.651892   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 75/120
	I0814 16:42:28.653410   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 76/120
	I0814 16:42:29.655037   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 77/120
	I0814 16:42:30.656814   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 78/120
	I0814 16:42:31.658735   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 79/120
	I0814 16:42:32.660287   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 80/120
	I0814 16:42:33.662421   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 81/120
	I0814 16:42:34.663890   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 82/120
	I0814 16:42:35.665268   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 83/120
	I0814 16:42:36.667559   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 84/120
	I0814 16:42:37.669111   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 85/120
	I0814 16:42:38.670681   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 86/120
	I0814 16:42:39.672179   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 87/120
	I0814 16:42:40.674187   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 88/120
	I0814 16:42:41.675777   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 89/120
	I0814 16:42:42.677947   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 90/120
	I0814 16:42:43.679187   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 91/120
	I0814 16:42:44.680632   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 92/120
	I0814 16:42:45.681968   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 93/120
	I0814 16:42:46.683409   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 94/120
	I0814 16:42:47.685274   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 95/120
	I0814 16:42:48.686730   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 96/120
	I0814 16:42:49.688740   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 97/120
	I0814 16:42:50.690156   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 98/120
	I0814 16:42:51.692079   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 99/120
	I0814 16:42:52.693945   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 100/120
	I0814 16:42:53.695498   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 101/120
	I0814 16:42:54.696698   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 102/120
	I0814 16:42:55.698379   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 103/120
	I0814 16:42:56.700041   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 104/120
	I0814 16:42:57.701849   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 105/120
	I0814 16:42:58.703138   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 106/120
	I0814 16:42:59.704492   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 107/120
	I0814 16:43:00.705928   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 108/120
	I0814 16:43:01.707648   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 109/120
	I0814 16:43:02.709932   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 110/120
	I0814 16:43:03.711802   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 111/120
	I0814 16:43:04.713437   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 112/120
	I0814 16:43:05.715097   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 113/120
	I0814 16:43:06.716582   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 114/120
	I0814 16:43:07.718724   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 115/120
	I0814 16:43:08.720324   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 116/120
	I0814 16:43:09.721963   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 117/120
	I0814 16:43:10.724194   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 118/120
	I0814 16:43:11.725862   40294 main.go:141] libmachine: (ha-597780-m04) Waiting for machine to stop 119/120
	I0814 16:43:12.726849   40294 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 16:43:12.726938   40294 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 16:43:12.728910   40294 out.go:177] 
	W0814 16:43:12.730139   40294 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 16:43:12.730154   40294 out.go:239] * 
	* 
	W0814 16:43:12.732666   40294 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 16:43:12.734045   40294 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-597780 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr: exit status 3 (18.844390617s)

                                                
                                                
-- stdout --
	ha-597780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-597780-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:43:12.778608   40740 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:43:12.778908   40740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:43:12.778919   40740 out.go:304] Setting ErrFile to fd 2...
	I0814 16:43:12.778923   40740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:43:12.779086   40740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:43:12.779244   40740 out.go:298] Setting JSON to false
	I0814 16:43:12.779271   40740 mustload.go:65] Loading cluster: ha-597780
	I0814 16:43:12.779376   40740 notify.go:220] Checking for updates...
	I0814 16:43:12.779677   40740 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:43:12.779697   40740 status.go:255] checking status of ha-597780 ...
	I0814 16:43:12.780101   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:12.780156   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:12.797603   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0814 16:43:12.798016   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:12.798658   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:12.798696   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:12.799035   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:12.799237   40740 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:43:12.800944   40740 status.go:330] ha-597780 host status = "Running" (err=<nil>)
	I0814 16:43:12.800959   40740 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:43:12.801254   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:12.801293   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:12.817077   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35363
	I0814 16:43:12.817591   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:12.818076   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:12.818103   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:12.818482   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:12.818701   40740 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:43:12.822262   40740 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:43:12.822812   40740 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:43:12.822841   40740 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:43:12.823115   40740 host.go:66] Checking if "ha-597780" exists ...
	I0814 16:43:12.823529   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:12.823578   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:12.838604   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I0814 16:43:12.838965   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:12.839492   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:12.839514   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:12.839813   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:12.840148   40740 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:43:12.840396   40740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:43:12.840418   40740 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:43:12.842998   40740 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:43:12.843540   40740 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:43:12.843579   40740 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:43:12.843759   40740 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:43:12.843932   40740 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:43:12.844109   40740 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:43:12.844235   40740 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:43:12.924511   40740 ssh_runner.go:195] Run: systemctl --version
	I0814 16:43:12.932292   40740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:43:12.951589   40740 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:43:12.951629   40740 api_server.go:166] Checking apiserver status ...
	I0814 16:43:12.951670   40740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:43:12.966965   40740 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5034/cgroup
	W0814 16:43:12.976055   40740 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5034/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:43:12.976108   40740 ssh_runner.go:195] Run: ls
	I0814 16:43:12.980233   40740 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:43:12.984306   40740 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:43:12.984328   40740 status.go:422] ha-597780 apiserver status = Running (err=<nil>)
	I0814 16:43:12.984341   40740 status.go:257] ha-597780 status: &{Name:ha-597780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:43:12.984370   40740 status.go:255] checking status of ha-597780-m02 ...
	I0814 16:43:12.984648   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:12.984688   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:12.999378   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0814 16:43:12.999834   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:13.000271   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:13.000290   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:13.000584   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:13.000782   40740 main.go:141] libmachine: (ha-597780-m02) Calling .GetState
	I0814 16:43:13.002229   40740 status.go:330] ha-597780-m02 host status = "Running" (err=<nil>)
	I0814 16:43:13.002244   40740 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:43:13.002627   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:13.002678   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:13.016995   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34491
	I0814 16:43:13.017419   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:13.017847   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:13.017869   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:13.018134   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:13.018274   40740 main.go:141] libmachine: (ha-597780-m02) Calling .GetIP
	I0814 16:43:13.021220   40740 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:43:13.021662   40740 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:37:44 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:43:13.021686   40740 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:43:13.021815   40740 host.go:66] Checking if "ha-597780-m02" exists ...
	I0814 16:43:13.022108   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:13.022149   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:13.037109   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37059
	I0814 16:43:13.037506   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:13.037933   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:13.037953   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:13.038251   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:13.038442   40740 main.go:141] libmachine: (ha-597780-m02) Calling .DriverName
	I0814 16:43:13.038604   40740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:43:13.038620   40740 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHHostname
	I0814 16:43:13.041297   40740 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:43:13.041715   40740 main.go:141] libmachine: (ha-597780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:ae:4d", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:37:44 +0000 UTC Type:0 Mac:52:54:00:a6:ae:4d Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-597780-m02 Clientid:01:52:54:00:a6:ae:4d}
	I0814 16:43:13.041745   40740 main.go:141] libmachine: (ha-597780-m02) DBG | domain ha-597780-m02 has defined IP address 192.168.39.225 and MAC address 52:54:00:a6:ae:4d in network mk-ha-597780
	I0814 16:43:13.041834   40740 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHPort
	I0814 16:43:13.042007   40740 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHKeyPath
	I0814 16:43:13.042144   40740 main.go:141] libmachine: (ha-597780-m02) Calling .GetSSHUsername
	I0814 16:43:13.042266   40740 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m02/id_rsa Username:docker}
	I0814 16:43:13.132114   40740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:43:13.152011   40740 kubeconfig.go:125] found "ha-597780" server: "https://192.168.39.254:8443"
	I0814 16:43:13.152047   40740 api_server.go:166] Checking apiserver status ...
	I0814 16:43:13.152095   40740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:43:13.166254   40740 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup
	W0814 16:43:13.175838   40740 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:43:13.175893   40740 ssh_runner.go:195] Run: ls
	I0814 16:43:13.180004   40740 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0814 16:43:13.184055   40740 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0814 16:43:13.184078   40740 status.go:422] ha-597780-m02 apiserver status = Running (err=<nil>)
	I0814 16:43:13.184088   40740 status.go:257] ha-597780-m02 status: &{Name:ha-597780-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:43:13.184109   40740 status.go:255] checking status of ha-597780-m04 ...
	I0814 16:43:13.184407   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:13.184448   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:13.199251   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0814 16:43:13.199727   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:13.200222   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:13.200254   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:13.200601   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:13.200769   40740 main.go:141] libmachine: (ha-597780-m04) Calling .GetState
	I0814 16:43:13.202375   40740 status.go:330] ha-597780-m04 host status = "Running" (err=<nil>)
	I0814 16:43:13.202394   40740 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:43:13.202695   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:13.202753   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:13.218034   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0814 16:43:13.218399   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:13.218838   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:13.218858   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:13.219179   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:13.219464   40740 main.go:141] libmachine: (ha-597780-m04) Calling .GetIP
	I0814 16:43:13.222667   40740 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:43:13.223158   40740 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:40:41 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:43:13.223196   40740 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:43:13.223370   40740 host.go:66] Checking if "ha-597780-m04" exists ...
	I0814 16:43:13.223655   40740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:43:13.223689   40740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:43:13.239982   40740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0814 16:43:13.240405   40740 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:43:13.240831   40740 main.go:141] libmachine: Using API Version  1
	I0814 16:43:13.240847   40740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:43:13.241146   40740 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:43:13.241342   40740 main.go:141] libmachine: (ha-597780-m04) Calling .DriverName
	I0814 16:43:13.241588   40740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:43:13.241615   40740 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHHostname
	I0814 16:43:13.244436   40740 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:43:13.244896   40740 main.go:141] libmachine: (ha-597780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:79:99", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:40:41 +0000 UTC Type:0 Mac:52:54:00:b1:79:99 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:ha-597780-m04 Clientid:01:52:54:00:b1:79:99}
	I0814 16:43:13.244927   40740 main.go:141] libmachine: (ha-597780-m04) DBG | domain ha-597780-m04 has defined IP address 192.168.39.209 and MAC address 52:54:00:b1:79:99 in network mk-ha-597780
	I0814 16:43:13.245051   40740 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHPort
	I0814 16:43:13.245224   40740 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHKeyPath
	I0814 16:43:13.245419   40740 main.go:141] libmachine: (ha-597780-m04) Calling .GetSSHUsername
	I0814 16:43:13.245593   40740 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780-m04/id_rsa Username:docker}
	W0814 16:43:31.579600   40740 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.209:22: connect: no route to host
	W0814 16:43:31.579710   40740 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.209:22: connect: no route to host
	E0814 16:43:31.579728   40740 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.209:22: connect: no route to host
	I0814 16:43:31.579736   40740 status.go:257] ha-597780-m04 status: &{Name:ha-597780-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0814 16:43:31.579749   40740 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.209:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-597780 -n ha-597780
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-597780 logs -n 25: (1.604983777s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-597780 ssh -n ha-597780-m02 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04:/home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m04 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp testdata/cp-test.txt                                                | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780:/home/docker/cp-test_ha-597780-m04_ha-597780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780 sudo cat                                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m02:/home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m02 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m03:/home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n                                                                 | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | ha-597780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-597780 ssh -n ha-597780-m03 sudo cat                                          | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC | 14 Aug 24 16:30 UTC |
	|         | /home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-597780 node stop m02 -v=7                                                     | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-597780 node start m02 -v=7                                                    | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:32 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-597780 -v=7                                                           | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-597780 -v=7                                                                | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-597780 --wait=true -v=7                                                    | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:35 UTC | 14 Aug 24 16:40 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-597780                                                                | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:40 UTC |                     |
	| node    | ha-597780 node delete m03 -v=7                                                   | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:40 UTC | 14 Aug 24 16:41 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-597780 stop -v=7                                                              | ha-597780 | jenkins | v1.33.1 | 14 Aug 24 16:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:35:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:35:59.976231   38304 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:35:59.976478   38304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:35:59.976486   38304 out.go:304] Setting ErrFile to fd 2...
	I0814 16:35:59.976491   38304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:35:59.976653   38304 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:35:59.977237   38304 out.go:298] Setting JSON to false
	I0814 16:35:59.978180   38304 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4704,"bootTime":1723648656,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:35:59.978233   38304 start.go:139] virtualization: kvm guest
	I0814 16:35:59.980770   38304 out.go:177] * [ha-597780] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:35:59.982118   38304 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:35:59.982133   38304 notify.go:220] Checking for updates...
	I0814 16:35:59.984435   38304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:35:59.985844   38304 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:35:59.987052   38304 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:35:59.988281   38304 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:35:59.989533   38304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:35:59.991381   38304 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:35:59.991491   38304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:35:59.991932   38304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:35:59.992011   38304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:36:00.006624   38304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0814 16:36:00.007076   38304 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:36:00.007589   38304 main.go:141] libmachine: Using API Version  1
	I0814 16:36:00.007609   38304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:36:00.008014   38304 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:36:00.008196   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:36:00.044240   38304 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 16:36:00.045543   38304 start.go:297] selected driver: kvm2
	I0814 16:36:00.045557   38304 start.go:901] validating driver "kvm2" against &{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.209 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:36:00.045733   38304 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:36:00.046169   38304 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:36:00.046256   38304 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 16:36:00.061008   38304 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 16:36:00.061723   38304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 16:36:00.061807   38304 cni.go:84] Creating CNI manager for ""
	I0814 16:36:00.061823   38304 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 16:36:00.061884   38304 start.go:340] cluster config:
	{Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.209 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:36:00.062029   38304 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:36:00.063777   38304 out.go:177] * Starting "ha-597780" primary control-plane node in "ha-597780" cluster
	I0814 16:36:00.065215   38304 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:36:00.065260   38304 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:36:00.065271   38304 cache.go:56] Caching tarball of preloaded images
	I0814 16:36:00.065368   38304 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 16:36:00.065394   38304 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 16:36:00.065506   38304 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/config.json ...
	I0814 16:36:00.065787   38304 start.go:360] acquireMachinesLock for ha-597780: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 16:36:00.065855   38304 start.go:364] duration metric: took 41.326µs to acquireMachinesLock for "ha-597780"
	I0814 16:36:00.065878   38304 start.go:96] Skipping create...Using existing machine configuration
	I0814 16:36:00.065902   38304 fix.go:54] fixHost starting: 
	I0814 16:36:00.066346   38304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:36:00.066395   38304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:36:00.080986   38304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37049
	I0814 16:36:00.081450   38304 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:36:00.082058   38304 main.go:141] libmachine: Using API Version  1
	I0814 16:36:00.082080   38304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:36:00.082479   38304 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:36:00.082723   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:36:00.082905   38304 main.go:141] libmachine: (ha-597780) Calling .GetState
	I0814 16:36:00.084804   38304 fix.go:112] recreateIfNeeded on ha-597780: state=Running err=<nil>
	W0814 16:36:00.084825   38304 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 16:36:00.087430   38304 out.go:177] * Updating the running kvm2 "ha-597780" VM ...
	I0814 16:36:00.088754   38304 machine.go:94] provisionDockerMachine start ...
	I0814 16:36:00.088769   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:36:00.088949   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.091354   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.091786   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.091815   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.091949   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.092133   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.092301   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.092436   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.092601   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:36:00.092774   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:36:00.092785   38304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 16:36:00.196491   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780
	
	I0814 16:36:00.196523   38304 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:36:00.196796   38304 buildroot.go:166] provisioning hostname "ha-597780"
	I0814 16:36:00.196837   38304 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:36:00.197039   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.199656   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.199982   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.200009   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.200167   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.200352   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.200500   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.200616   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.200755   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:36:00.200920   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:36:00.200932   38304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-597780 && echo "ha-597780" | sudo tee /etc/hostname
	I0814 16:36:00.314158   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-597780
	
	I0814 16:36:00.314187   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.317090   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.317426   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.317452   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.317703   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.317904   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.318059   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.318232   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.318415   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:36:00.318635   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:36:00.318656   38304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-597780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-597780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-597780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 16:36:00.415943   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 16:36:00.415972   38304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 16:36:00.416003   38304 buildroot.go:174] setting up certificates
	I0814 16:36:00.416018   38304 provision.go:84] configureAuth start
	I0814 16:36:00.416027   38304 main.go:141] libmachine: (ha-597780) Calling .GetMachineName
	I0814 16:36:00.416307   38304 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:36:00.418868   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.419237   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.419274   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.419447   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.421573   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.422025   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.422051   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.422188   38304 provision.go:143] copyHostCerts
	I0814 16:36:00.422220   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:36:00.422251   38304 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 16:36:00.422259   38304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 16:36:00.422322   38304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 16:36:00.422426   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:36:00.422453   38304 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 16:36:00.422459   38304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 16:36:00.422499   38304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 16:36:00.422586   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:36:00.422609   38304 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 16:36:00.422617   38304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 16:36:00.422654   38304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 16:36:00.422747   38304 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.ha-597780 san=[127.0.0.1 192.168.39.4 ha-597780 localhost minikube]
	I0814 16:36:00.512554   38304 provision.go:177] copyRemoteCerts
	I0814 16:36:00.512615   38304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 16:36:00.512638   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.515444   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.515823   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.515851   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.516076   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.516265   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.516439   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.516582   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:36:00.597306   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 16:36:00.597379   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 16:36:00.620191   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 16:36:00.620250   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 16:36:00.645421   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 16:36:00.645482   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0814 16:36:00.668197   38304 provision.go:87] duration metric: took 252.165479ms to configureAuth
	I0814 16:36:00.668230   38304 buildroot.go:189] setting minikube options for container-runtime
	I0814 16:36:00.668516   38304 config.go:182] Loaded profile config "ha-597780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:36:00.668608   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:36:00.671433   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.671869   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:36:00.671900   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:36:00.672111   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:36:00.672275   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.672408   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:36:00.672541   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:36:00.672742   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:36:00.672942   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:36:00.672968   38304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 16:37:31.423958   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 16:37:31.423988   38304 machine.go:97] duration metric: took 1m31.335222511s to provisionDockerMachine
	I0814 16:37:31.424000   38304 start.go:293] postStartSetup for "ha-597780" (driver="kvm2")
	I0814 16:37:31.424011   38304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 16:37:31.424028   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.424392   38304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 16:37:31.424416   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.427833   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.428310   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.428336   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.428500   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.428673   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.428812   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.428962   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:37:31.510576   38304 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 16:37:31.514529   38304 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 16:37:31.514557   38304 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 16:37:31.514619   38304 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 16:37:31.514719   38304 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 16:37:31.514732   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 16:37:31.514858   38304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 16:37:31.524175   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:37:31.547624   38304 start.go:296] duration metric: took 123.609641ms for postStartSetup
	I0814 16:37:31.547670   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.547948   38304 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0814 16:37:31.547972   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.550732   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.551052   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.551074   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.551273   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.551477   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.551650   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.551795   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	W0814 16:37:31.629152   38304 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0814 16:37:31.629175   38304 fix.go:56] duration metric: took 1m31.563287641s for fixHost
	I0814 16:37:31.629195   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.632193   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.632539   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.632577   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.632732   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.632919   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.633105   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.633248   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.633418   38304 main.go:141] libmachine: Using SSH client type: native
	I0814 16:37:31.633629   38304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.4 22 <nil> <nil>}
	I0814 16:37:31.633645   38304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 16:37:31.731807   38304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723653451.684969952
	
	I0814 16:37:31.731830   38304 fix.go:216] guest clock: 1723653451.684969952
	I0814 16:37:31.731837   38304 fix.go:229] Guest: 2024-08-14 16:37:31.684969952 +0000 UTC Remote: 2024-08-14 16:37:31.629181773 +0000 UTC m=+91.687471026 (delta=55.788179ms)
	I0814 16:37:31.731855   38304 fix.go:200] guest clock delta is within tolerance: 55.788179ms
	I0814 16:37:31.731861   38304 start.go:83] releasing machines lock for "ha-597780", held for 1m31.665992819s
	I0814 16:37:31.731884   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.732143   38304 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:37:31.735105   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.735542   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.735577   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.735757   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.736254   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.736461   38304 main.go:141] libmachine: (ha-597780) Calling .DriverName
	I0814 16:37:31.736577   38304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 16:37:31.736621   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.736674   38304 ssh_runner.go:195] Run: cat /version.json
	I0814 16:37:31.736697   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHHostname
	I0814 16:37:31.739283   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.739410   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.739779   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.739842   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:31.739865   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.739881   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:31.739944   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.740074   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHPort
	I0814 16:37:31.740142   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.740226   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHKeyPath
	I0814 16:37:31.740315   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.740373   38304 main.go:141] libmachine: (ha-597780) Calling .GetSSHUsername
	I0814 16:37:31.740496   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:37:31.740554   38304 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/ha-597780/id_rsa Username:docker}
	I0814 16:37:31.852886   38304 ssh_runner.go:195] Run: systemctl --version
	I0814 16:37:31.858678   38304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 16:37:32.016098   38304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 16:37:32.023291   38304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 16:37:32.023371   38304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 16:37:32.031883   38304 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 16:37:32.031901   38304 start.go:495] detecting cgroup driver to use...
	I0814 16:37:32.031958   38304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 16:37:32.046647   38304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 16:37:32.059641   38304 docker.go:217] disabling cri-docker service (if available) ...
	I0814 16:37:32.059699   38304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 16:37:32.072485   38304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 16:37:32.085345   38304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 16:37:32.234125   38304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 16:37:32.370411   38304 docker.go:233] disabling docker service ...
	I0814 16:37:32.370495   38304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 16:37:32.386049   38304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 16:37:32.399257   38304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 16:37:32.537900   38304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 16:37:32.677132   38304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 16:37:32.690524   38304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 16:37:32.708081   38304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 16:37:32.708142   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.718154   38304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 16:37:32.718222   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.728032   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.737888   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.747340   38304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 16:37:32.757112   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.767552   38304 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.777965   38304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 16:37:32.787811   38304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 16:37:32.797641   38304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 16:37:32.806785   38304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:37:32.951248   38304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 16:37:33.225205   38304 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 16:37:33.225268   38304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 16:37:33.231641   38304 start.go:563] Will wait 60s for crictl version
	I0814 16:37:33.231685   38304 ssh_runner.go:195] Run: which crictl
	I0814 16:37:33.235367   38304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 16:37:33.271002   38304 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 16:37:33.271090   38304 ssh_runner.go:195] Run: crio --version
	I0814 16:37:33.299017   38304 ssh_runner.go:195] Run: crio --version
	I0814 16:37:33.330758   38304 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 16:37:33.332218   38304 main.go:141] libmachine: (ha-597780) Calling .GetIP
	I0814 16:37:33.335407   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:33.335852   38304 main.go:141] libmachine: (ha-597780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:0e:d3", ip: ""} in network mk-ha-597780: {Iface:virbr1 ExpiryTime:2024-08-14 17:25:30 +0000 UTC Type:0 Mac:52:54:00:d7:0e:d3 Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:ha-597780 Clientid:01:52:54:00:d7:0e:d3}
	I0814 16:37:33.335879   38304 main.go:141] libmachine: (ha-597780) DBG | domain ha-597780 has defined IP address 192.168.39.4 and MAC address 52:54:00:d7:0e:d3 in network mk-ha-597780
	I0814 16:37:33.336090   38304 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 16:37:33.340785   38304 kubeadm.go:883] updating cluster {Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.209 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 16:37:33.340924   38304 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:37:33.340965   38304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:37:33.385162   38304 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:37:33.385187   38304 crio.go:433] Images already preloaded, skipping extraction
	I0814 16:37:33.385244   38304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 16:37:33.421801   38304 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 16:37:33.421825   38304 cache_images.go:84] Images are preloaded, skipping loading
	I0814 16:37:33.421833   38304 kubeadm.go:934] updating node { 192.168.39.4 8443 v1.31.0 crio true true} ...
	I0814 16:37:33.421955   38304 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-597780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 16:37:33.422038   38304 ssh_runner.go:195] Run: crio config
	I0814 16:37:33.473783   38304 cni.go:84] Creating CNI manager for ""
	I0814 16:37:33.473807   38304 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0814 16:37:33.473819   38304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 16:37:33.473849   38304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.4 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-597780 NodeName:ha-597780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 16:37:33.473985   38304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-597780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 16:37:33.474002   38304 kube-vip.go:115] generating kube-vip config ...
	I0814 16:37:33.474047   38304 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0814 16:37:33.485053   38304 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0814 16:37:33.485190   38304 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
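	The manifest above runs kube-vip as a static pod on each control-plane node: it advertises the virtual IP 192.168.39.254 via ARP on eth0, elects a leader through the plndr-cp-lock lease, and, because lb_enable is set, load-balances API server traffic on port 8443. A minimal Go sketch, assuming network access to the 192.168.39.0/24 cluster network, that simply checks whether the VIP accepts TCP connections on the API server port (a hypothetical probe, not part of minikube):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // VIP and port taken from the kube-vip manifest above (address / port env vars).
        addr := net.JoinHostPort("192.168.39.254", "8443")
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("VIP is accepting TCP connections on", addr)
    }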
	I0814 16:37:33.485245   38304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 16:37:33.494772   38304 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 16:37:33.494837   38304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0814 16:37:33.503758   38304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0814 16:37:33.520141   38304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 16:37:33.536125   38304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0814 16:37:33.553365   38304 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0814 16:37:33.569739   38304 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0814 16:37:33.574714   38304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 16:37:33.722569   38304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 16:37:33.737273   38304 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780 for IP: 192.168.39.4
	I0814 16:37:33.737305   38304 certs.go:194] generating shared ca certs ...
	I0814 16:37:33.737328   38304 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:37:33.737516   38304 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 16:37:33.737595   38304 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 16:37:33.737613   38304 certs.go:256] generating profile certs ...
	I0814 16:37:33.737743   38304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/client.key
	I0814 16:37:33.737783   38304 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.3fce3d93
	I0814 16:37:33.737815   38304 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.3fce3d93 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.4 192.168.39.225 192.168.39.167 192.168.39.254]
	I0814 16:37:33.979222   38304 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.3fce3d93 ...
	I0814 16:37:33.979256   38304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.3fce3d93: {Name:mkb87fe715cb554aa1237444086f355a72cf705b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:37:33.979464   38304 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.3fce3d93 ...
	I0814 16:37:33.979481   38304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.3fce3d93: {Name:mk777447d0b1ce75f45ec8e2dd80f852f96d3182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 16:37:33.979573   38304 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt.3fce3d93 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt
	I0814 16:37:33.979742   38304 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key.3fce3d93 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key
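	The apiserver certificate generated above is signed for every address the API server can be reached on: the in-cluster service IPs, localhost, each control-plane node IP and the kube-vip VIP. A minimal Go sketch of issuing such a certificate with IP SANs from an already-loaded CA (hypothetical helper names, not minikube's crypto.go):

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signAPIServerCert issues a serving certificate whose IP SANs match the list
    // in the log line above, signed by the given CA. It returns the DER-encoded
    // certificate and its private key.
    func signAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // SANs from the log line above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.4"), net.ParseIP("192.168.39.225"),
                net.ParseIP("192.168.39.167"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }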
	I0814 16:37:33.979882   38304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key
	I0814 16:37:33.979898   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 16:37:33.979912   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 16:37:33.979926   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 16:37:33.979942   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 16:37:33.979954   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 16:37:33.979969   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 16:37:33.979981   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 16:37:33.979992   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 16:37:33.980057   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 16:37:33.980107   38304 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 16:37:33.980119   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 16:37:33.980168   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 16:37:33.980195   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 16:37:33.980223   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 16:37:33.980266   38304 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 16:37:33.980306   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:37:33.980324   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 16:37:33.980336   38304 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 16:37:33.980895   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 16:37:34.006237   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 16:37:34.029028   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 16:37:34.051558   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 16:37:34.074765   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 16:37:34.098415   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 16:37:34.120496   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 16:37:34.143834   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/ha-597780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 16:37:34.166784   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 16:37:34.189574   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 16:37:34.212275   38304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 16:37:34.234445   38304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 16:37:34.249584   38304 ssh_runner.go:195] Run: openssl version
	I0814 16:37:34.255027   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 16:37:34.264747   38304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:37:34.269179   38304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:37:34.269226   38304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 16:37:34.274326   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 16:37:34.282590   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 16:37:34.292091   38304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 16:37:34.296191   38304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 16:37:34.296236   38304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 16:37:34.301356   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 16:37:34.309784   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 16:37:34.319919   38304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 16:37:34.323712   38304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 16:37:34.323746   38304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 16:37:34.329232   38304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
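	Each CA copied to /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0, 51391683.0 and 3ec20f2e.0 names above); that hash-named symlink is how OpenSSL finds a trusted CA. A minimal Go sketch of the same hash-and-link step, assuming openssl is on PATH and the program runs as root (not minikube's ssh_runner code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash mirrors the `openssl x509 -hash -noout` + `ln -fs` sequence above.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace any existing link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }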
	I0814 16:37:34.337955   38304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 16:37:34.342042   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 16:37:34.349900   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 16:37:34.358163   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 16:37:34.367648   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 16:37:34.376229   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 16:37:34.384894   38304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
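	The -checkend 86400 flag asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit here would make minikube regenerate the certificate. A minimal Go sketch of the same check against a PEM-encoded certificate (hypothetical helper, not minikube code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend <seconds>` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }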
	I0814 16:37:34.395015   38304 kubeadm.go:392] StartCluster: {Name:ha-597780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-597780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.167 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.209 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:37:34.395196   38304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 16:37:34.395253   38304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 16:37:34.470623   38304 cri.go:89] found id: "1e41add6fd3aadca92ebd87cbc3ca06c6e52b5219af598bc986a599626b3fea0"
	I0814 16:37:34.470647   38304 cri.go:89] found id: "c12bbd0fec638d4c0fa1fe3168f7f54b850d776c969bab8fb5a129fd9a1ff017"
	I0814 16:37:34.470651   38304 cri.go:89] found id: "2523827ba24c337126d2deaf39a69d56b9b5730b94440e598ae0a21caa13a627"
	I0814 16:37:34.470655   38304 cri.go:89] found id: "422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c"
	I0814 16:37:34.470658   38304 cri.go:89] found id: "e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d"
	I0814 16:37:34.470663   38304 cri.go:89] found id: "fdde6ae1e8d74427216ede0d7dad128cd2183769f04fab964ea0060a3dd2b1ee"
	I0814 16:37:34.470669   38304 cri.go:89] found id: "9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1"
	I0814 16:37:34.470674   38304 cri.go:89] found id: "37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb"
	I0814 16:37:34.470679   38304 cri.go:89] found id: "f67f9d9915d534085918d0529b19548940cd4887f3fcff515d5c5cf62eece770"
	I0814 16:37:34.470691   38304 cri.go:89] found id: "be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905"
	I0814 16:37:34.470697   38304 cri.go:89] found id: "72903e605408111be84917c525af67e79889822f24a9cf8ba1b60605ecc495fd"
	I0814 16:37:34.470702   38304 cri.go:89] found id: "9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e"
	I0814 16:37:34.470708   38304 cri.go:89] found id: "4ad80a864cc602ff3ed5231f18c40e60acb39b91e37eb9ecf4ac327c268587ea"
	I0814 16:37:34.470715   38304 cri.go:89] found id: ""
	I0814 16:37:34.470761   38304 ssh_runner.go:195] Run: sudo runc list -f json
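	The container IDs listed above come from crictl ps -a --quiet filtered on the kube-system namespace label, exactly as in the ssh_runner line. A minimal Go sketch that runs the same query locally on the node, assuming crictl is installed and CRI-O is running (not minikube's cri.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same query as the ssh_runner line above; requires root and a running CRI-O.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers\n", len(ids))
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }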
	
	
	==> CRI-O <==
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.158654312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653812158629328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3be08b1-246c-410c-a008-ccfdb6013a1a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.159325759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f7bedc3-c846-457d-82fd-1a459d31f1b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.159390021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f7bedc3-c846-457d-82fd-1a459d31f1b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.159795514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d7a047a63d4f358401ba14edbe7ae75853efb926363557abe896e917a35c6e1,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723653531871374687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723653502868507192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723653501865848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2443fafb925cc387eea7c3e1f71a41139be3161d3ba5fde8e40940fb2d07970b,PodSandboxId:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723653491125823576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981eb8296cdeb6d40401b0a529c6358f12551effc26a6a2c5217c4bcd27779ce,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723653490860603676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31cdbd2a724ef44a5f78908dc3852ec9665db36cf9096de1f2e03f97d304b3,PodSandboxId:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723653468195833639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea,PodSandboxId:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723653458910000622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f,PodSandboxId:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723653457889718585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e4820d9
35853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7,PodSandboxId:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653457839833373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723653457705328500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e,PodSandboxId:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723653457660851153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f,PodSandboxId:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723653457646958209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723653457539829908,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579,PodSandboxId:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653454561571505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723652958530125849,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778224096082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778200790933,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723652766365339973,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723652762447302359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723652750804163125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723652750785390188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f7bedc3-c846-457d-82fd-1a459d31f1b3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.198127434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=871e57b1-2704-4afb-8391-ac3ce1c8ae7d name=/runtime.v1.RuntimeService/Version
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.198202797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=871e57b1-2704-4afb-8391-ac3ce1c8ae7d name=/runtime.v1.RuntimeService/Version
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.200151128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a6f1afe-4fd9-4f11-a10a-47b3058bd5c9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.200662852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653812200638935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a6f1afe-4fd9-4f11-a10a-47b3058bd5c9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.201200052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff14b665-ec79-45e7-9784-51d38ab9535f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.201305748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff14b665-ec79-45e7-9784-51d38ab9535f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.201713403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d7a047a63d4f358401ba14edbe7ae75853efb926363557abe896e917a35c6e1,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723653531871374687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723653502868507192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723653501865848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2443fafb925cc387eea7c3e1f71a41139be3161d3ba5fde8e40940fb2d07970b,PodSandboxId:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723653491125823576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981eb8296cdeb6d40401b0a529c6358f12551effc26a6a2c5217c4bcd27779ce,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723653490860603676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31cdbd2a724ef44a5f78908dc3852ec9665db36cf9096de1f2e03f97d304b3,PodSandboxId:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723653468195833639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea,PodSandboxId:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723653458910000622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f,PodSandboxId:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723653457889718585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e4820d9
35853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7,PodSandboxId:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653457839833373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723653457705328500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e,PodSandboxId:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723653457660851153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f,PodSandboxId:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723653457646958209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723653457539829908,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579,PodSandboxId:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653454561571505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723652958530125849,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778224096082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778200790933,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723652766365339973,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723652762447302359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723652750804163125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723652750785390188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff14b665-ec79-45e7-9784-51d38ab9535f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.241314538Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37ce6ca9-9e13-4713-893e-7f525ee3df56 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.241403857Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37ce6ca9-9e13-4713-893e-7f525ee3df56 name=/runtime.v1.RuntimeService/Version
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.242589448Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e349885a-e1a1-4473-b2c1-e3d30c527836 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.243494800Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653812243430709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e349885a-e1a1-4473-b2c1-e3d30c527836 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.244472473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c15ace4-5642-40ab-826c-158f773e02e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.244533997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c15ace4-5642-40ab-826c-158f773e02e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.244914598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d7a047a63d4f358401ba14edbe7ae75853efb926363557abe896e917a35c6e1,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723653531871374687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723653502868507192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723653501865848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2443fafb925cc387eea7c3e1f71a41139be3161d3ba5fde8e40940fb2d07970b,PodSandboxId:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723653491125823576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981eb8296cdeb6d40401b0a529c6358f12551effc26a6a2c5217c4bcd27779ce,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723653490860603676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31cdbd2a724ef44a5f78908dc3852ec9665db36cf9096de1f2e03f97d304b3,PodSandboxId:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723653468195833639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea,PodSandboxId:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723653458910000622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f,PodSandboxId:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723653457889718585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e4820d9
35853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7,PodSandboxId:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653457839833373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723653457705328500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e,PodSandboxId:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723653457660851153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f,PodSandboxId:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723653457646958209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723653457539829908,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579,PodSandboxId:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653454561571505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723652958530125849,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778224096082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778200790933,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723652766365339973,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723652762447302359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723652750804163125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723652750785390188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c15ace4-5642-40ab-826c-158f773e02e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.284574165Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98b3ba20-1ada-4d27-8e5e-f96ea97877dd name=/runtime.v1.RuntimeService/Version
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.284656880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98b3ba20-1ada-4d27-8e5e-f96ea97877dd name=/runtime.v1.RuntimeService/Version
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.285827617Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61df53ea-8aba-45a5-900d-9f2bb866654c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.286493344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653812286466430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61df53ea-8aba-45a5-900d-9f2bb866654c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.286978717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=414835da-671b-4cc5-b7e1-50719a99807c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.287028964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=414835da-671b-4cc5-b7e1-50719a99807c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 16:43:32 ha-597780 crio[3567]: time="2024-08-14 16:43:32.287506502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d7a047a63d4f358401ba14edbe7ae75853efb926363557abe896e917a35c6e1,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723653531871374687,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723653502868507192,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723653501865848163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2443fafb925cc387eea7c3e1f71a41139be3161d3ba5fde8e40940fb2d07970b,PodSandboxId:e2479ec996bb180972116be2f16961d9414ef84345e1873b2e61fe87616f6fcc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723653491125823576,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:981eb8296cdeb6d40401b0a529c6358f12551effc26a6a2c5217c4bcd27779ce,PodSandboxId:352ccf859fcf6add2e258cbddf3a1ca3d9938be679b4cc9f8ee3db79d440fc9a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723653490860603676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9939439d-cddd-4505-b554-b72f749269fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d31cdbd2a724ef44a5f78908dc3852ec9665db36cf9096de1f2e03f97d304b3,PodSandboxId:69b675c5debdafe5c79208c06321cddca332e097a71edf3f8913724a3cefd86d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723653468195833639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6eda7162bf969e95f0578138dd8c6ad,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea,PodSandboxId:d58a265d2473cd71dbd422a2a7066f73f19e42e351c0631f89110b23ca227b6c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723653458910000622,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f,PodSandboxId:fd01497642c1d80c907572a4d3306fec7914bdb073b6a4bd0de2d777fa5d4958,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723653457889718585,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96e4820d9
35853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7,PodSandboxId:749b6336be4d88594fdf5f67a1f64f8fe9b307a1d090b2511b034dd05ce413b6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653457839833373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89,PodSandboxId:c127b102483e0f48fa5f3686fa3c1aa912e6061d57510d71b8db5d42b59097e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723653457705328500,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f561a4998ad7d50b7600c5793dffc8dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount
: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e,PodSandboxId:14b128d6cb5027649ee08e04f38180e670b5fb57031cb53668b1f942bd4245f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723653457660851153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f,PodSandboxId:6e9c89800b459955c596655cc3cee47f63fd440204b88153673e89ad5eb175f6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723653457646958209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0,PodSandboxId:26c626804c784ae803ec23d11862aaa18642588a2450782e1e41f1a8f495b537,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723653457539829908,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9d9336ca03d755bb866a3122f131c5c,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579,PodSandboxId:9c9eb56944555998bd25081c57daf5bf25e04dcac2037f576690941fd2f65ae0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723653454561571505,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e27a742b157d350e4dd27f02811c7d2c11620cf6f810639e137d2b2bf4f7bbe8,PodSandboxId:24fc5367bc64fe8e3ad77223a59b6638781ac1a1e856865b007687c2018ae317,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723652958530125849,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-rq7wd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1cd22b55-7981-4a29-8365-557fc17a8ae1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c,PodSandboxId:103da8631543805d53a96e35df1afd2e07dfbd34830a7a65cf52f0612b635298,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778224096082,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-28k2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec3725c1-3e21-49b0-9caf-922ef1928ed8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d,PodSandboxId:6b4d32c83825af96e6e8409dce716cc0f1455f390ee17e94f32bd0754a1da6ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723652778200790933,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kc84b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a483f17-cab5-4090-abc6-808d84397a8a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1,PodSandboxId:7c496d8d976b0de14dae80b4c6a69892526ae225797e0bb789cf339756839ef0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723652766365339973,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zm75h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e5eabaf-5973-4658-b12b-f7faf67b8af7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb,PodSandboxId:403a7dadd2cf18d356368f7dc6e6a3909e83b8b86053fbeb1f73dc49bb1c5e74,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723652762447302359,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79txl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea48ab09-60d5-4133-accc-f3fd69a50c5d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905,PodSandboxId:c3627f4eb54717525fabbce048a0f25a0aecc173e23825529706f722cb14aaf1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723652750804163125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73a9cba43895665a491de601c899e0bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e,PodSandboxId:dfba8d4d791ac767fa7a8460ca235eb405434cd208b6c4678315ae851e5a011d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1723652750785390188,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-597780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 557e39ea39f4993c51b28b9eeb9a1dd9,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=414835da-671b-4cc5-b7e1-50719a99807c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d7a047a63d4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   352ccf859fcf6       storage-provisioner
	0b0090111a907       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Running             kube-apiserver            3                   26c626804c784       kube-apiserver-ha-597780
	0feebc4c91acc       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Running             kube-controller-manager   2                   c127b102483e0       kube-controller-manager-ha-597780
	2443fafb925cc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   e2479ec996bb1       busybox-7dff88458-rq7wd
	981eb8296cdeb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   352ccf859fcf6       storage-provisioner
	1d31cdbd2a724       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   69b675c5debda       kube-vip-ha-597780
	bd4f3f03c5946       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   d58a265d2473c       kube-proxy-79txl
	71c507d68d37b       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   fd01497642c1d       kindnet-zm75h
	96e4820d93585       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   749b6336be4d8       coredns-6f6b679f8f-28k2m
	047dd2746b2ff       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   c127b102483e0       kube-controller-manager-ha-597780
	78d453751eb78       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   14b128d6cb502       etcd-ha-597780
	804f825214568       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   6e9c89800b459       kube-scheduler-ha-597780
	bd1bda5de444e       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   26c626804c784       kube-apiserver-ha-597780
	f031e182fbc1d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   9c9eb56944555       coredns-6f6b679f8f-kc84b
	e27a742b157d3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   24fc5367bc64f       busybox-7dff88458-rq7wd
	422bd8a4c6f73       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   103da86315438       coredns-6f6b679f8f-28k2m
	e6f5722727045       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   6b4d32c83825a       coredns-6f6b679f8f-kc84b
	9383508aacb47       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    17 minutes ago      Exited              kindnet-cni               0                   7c496d8d976b0       kindnet-zm75h
	37ced76497679       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      17 minutes ago      Exited              kube-proxy                0                   403a7dadd2cf1       kube-proxy-79txl
	be37bacc58210       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      17 minutes ago      Exited              etcd                      0                   c3627f4eb5471       etcd-ha-597780
	9049789221ccd       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      17 minutes ago      Exited              kube-scheduler            0                   dfba8d4d791ac       kube-scheduler-ha-597780
	
	
	==> coredns [422bd8a4c6f73adcd2455330867e35a1d544ceba09ba70233ba08583d2b5317c] <==
	[INFO] 10.244.2.2:36168 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009915s
	[INFO] 10.244.0.4:54131 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070841s
	[INFO] 10.244.0.4:55620 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000091367s
	[INFO] 10.244.0.4:43235 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075669s
	[INFO] 10.244.1.2:41689 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119685s
	[INFO] 10.244.1.2:59902 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124326s
	[INFO] 10.244.2.2:40926 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109376s
	[INFO] 10.244.2.2:51410 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000177337s
	[INFO] 10.244.0.4:34296 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121681s
	[INFO] 10.244.1.2:46660 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107008s
	[INFO] 10.244.1.2:58922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127256s
	[INFO] 10.244.1.2:50299 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000110499s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1949&timeout=5m57s&timeoutSeconds=357&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1949&timeout=6m49s&timeoutSeconds=409&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	
	
	==> coredns [96e4820d935853d422990adfe150efcf30cf4f9e5d613b73f919609928c16df7] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e6f5722727045e03073df1bbf73c67fa697d2995cf97bda2806dc43026b8852d] <==
	[INFO] 10.244.2.2:34873 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084958s
	[INFO] 10.244.0.4:38163 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100276s
	[INFO] 10.244.0.4:57638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133846s
	[INFO] 10.244.0.4:41879 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064694s
	[INFO] 10.244.1.2:53124 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000175486s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1949&timeout=7m31s&timeoutSeconds=451&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1946&timeout=8m51s&timeoutSeconds=531&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1949&timeout=7m54s&timeoutSeconds=474&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1946": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=1946": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=1949": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[260799391]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:35:46.732) (total time: 12311ms):
	Trace[260799391]: ---"Objects listed" error:Unauthorized 12311ms (16:35:59.044)
	Trace[260799391]: [12.311906254s] [12.311906254s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f031e182fbc1d4e970b42cad69f5b0b5bd9c992b61b42337fd35916e56ef8579] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[238329841]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:37:45.083) (total time: 10002ms):
	Trace[238329841]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:37:55.084)
	Trace[238329841]: [10.002209092s] [10.002209092s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55108->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1162545651]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (14-Aug-2024 16:37:51.873) (total time: 10440ms):
	Trace[1162545651]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55108->10.96.0.1:443: read: connection reset by peer 10439ms (16:38:02.313)
	Trace[1162545651]: [10.440179041s] [10.440179041s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:55108->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-597780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_26_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:25:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:43:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:43:32 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:43:32 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:43:32 +0000   Wed, 14 Aug 2024 16:25:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:43:32 +0000   Wed, 14 Aug 2024 16:26:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.4
	  Hostname:    ha-597780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 380f2e1fef9b4a7ba6d1d939cb1bae1a
	  System UUID:                380f2e1f-ef9b-4a7b-a6d1-d939cb1bae1a
	  Boot ID:                    aa55ed43-2220-4096-a571-51cd5b70ed86
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rq7wd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-28k2m             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-6f6b679f8f-kc84b             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-597780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-zm75h                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-597780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-597780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-79txl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-597780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-597780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m7s                   kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-597780 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-597780 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-597780 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                    node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-597780 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Warning  ContainerGCFailed        6m33s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             6m20s (x2 over 6m45s)  kubelet          Node ha-597780 status is now: NodeNotReady
	  Normal   RegisteredNode           5m11s                  node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal   RegisteredNode           5m4s                   node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-597780 event: Registered Node ha-597780 in Controller
	
	
	Name:               ha-597780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_27_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:27:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:43:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 16:39:36 +0000   Wed, 14 Aug 2024 16:38:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 16:39:36 +0000   Wed, 14 Aug 2024 16:38:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 16:39:36 +0000   Wed, 14 Aug 2024 16:38:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 16:39:36 +0000   Wed, 14 Aug 2024 16:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-597780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a36bc81f5b549f48c64d8093b0c45f0
	  System UUID:                2a36bc81-f5b5-49f4-8c64-d8093b0c45f0
	  Boot ID:                    40b81862-df95-474f-9bec-f0356bc47e40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w9lh2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-597780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-c8f8r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-597780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-597780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4q2dq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-597780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-597780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m40s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-597780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-597780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-597780-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-597780-m02 status is now: NodeNotReady
	  Normal  Starting                 5m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m36s)  kubelet          Node ha-597780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m36s)  kubelet          Node ha-597780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x7 over 5m36s)  kubelet          Node ha-597780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m11s                  node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           5m4s                   node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-597780-m02 event: Registered Node ha-597780-m02 in Controller
	
	
	Name:               ha-597780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-597780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=ha-597780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T16_29_55_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:29:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-597780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 16:41:06 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 14 Aug 2024 16:40:46 +0000   Wed, 14 Aug 2024 16:41:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 14 Aug 2024 16:40:46 +0000   Wed, 14 Aug 2024 16:41:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 14 Aug 2024 16:40:46 +0000   Wed, 14 Aug 2024 16:41:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 14 Aug 2024 16:40:46 +0000   Wed, 14 Aug 2024 16:41:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.209
	  Hostname:    ha-597780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fa932f445844ff7a66a64ac6cdf169b
	  System UUID:                0fa932f4-4584-4ff7-a66a-64ac6cdf169b
	  Boot ID:                    b6117a86-4071-4f3c-880b-c8232cde1ee3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7l4cq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kindnet-5x5s7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-bmf62           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m42s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-597780-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-597780-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-597780-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   RegisteredNode           5m11s                  node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   RegisteredNode           5m4s                   node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   NodeNotReady             4m31s                  node-controller  Node ha-597780-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-597780-m04 event: Registered Node ha-597780-m04 in Controller
	  Normal   Starting                 2m46s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m46s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m46s                  kubelet          Node ha-597780-m04 has been rebooted, boot id: b6117a86-4071-4f3c-880b-c8232cde1ee3
	  Normal   NodeHasSufficientMemory  2m46s (x2 over 2m46s)  kubelet          Node ha-597780-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x2 over 2m46s)  kubelet          Node ha-597780-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x2 over 2m46s)  kubelet          Node ha-597780-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m46s                  kubelet          Node ha-597780-m04 status is now: NodeReady
	  Normal   NodeNotReady             104s                   node-controller  Node ha-597780-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.613825] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.065926] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069239] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.173403] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.130531] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.250569] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +3.824868] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +3.756438] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.057963] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.054111] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.086455] kauditd_printk_skb: 79 callbacks suppressed
	[Aug14 16:26] kauditd_printk_skb: 62 callbacks suppressed
	[Aug14 16:27] kauditd_printk_skb: 26 callbacks suppressed
	[Aug14 16:37] systemd-fstab-generator[3485]: Ignoring "noauto" option for root device
	[  +0.144486] systemd-fstab-generator[3497]: Ignoring "noauto" option for root device
	[  +0.169121] systemd-fstab-generator[3511]: Ignoring "noauto" option for root device
	[  +0.133515] systemd-fstab-generator[3523]: Ignoring "noauto" option for root device
	[  +0.275861] systemd-fstab-generator[3552]: Ignoring "noauto" option for root device
	[  +0.759588] systemd-fstab-generator[3654]: Ignoring "noauto" option for root device
	[  +3.681325] kauditd_printk_skb: 132 callbacks suppressed
	[ +10.900447] kauditd_printk_skb: 88 callbacks suppressed
	[Aug14 16:38] kauditd_printk_skb: 6 callbacks suppressed
	[ +14.179034] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [78d453751eb78a43af3188f0c9f5c0f9ded6beb22938705c7c95989b7681bc2e] <==
	{"level":"info","ts":"2024-08-14T16:40:11.255602Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:11.256282Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7ab0973fa604e492","to":"b8cd3528b7e3c388","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-14T16:40:11.256366Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"warn","ts":"2024-08-14T16:40:13.631099Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:13.631175Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b8cd3528b7e3c388","rtt":"0s","error":"dial tcp 192.168.39.167:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-14T16:40:59.373824Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.167:45310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-08-14T16:40:59.388995Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.167:45314","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-14T16:40:59.415269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7ab0973fa604e492 switched to configuration voters=(7257601310133563567 8840732368152355986)"}
	{"level":"info","ts":"2024-08-14T16:40:59.417477Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6b117bdc86acb526","local-member-id":"7ab0973fa604e492","removed-remote-peer-id":"b8cd3528b7e3c388","removed-remote-peer-urls":["https://192.168.39.167:2380"]}
	{"level":"info","ts":"2024-08-14T16:40:59.417612Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"warn","ts":"2024-08-14T16:40:59.418185Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:59.418370Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"warn","ts":"2024-08-14T16:40:59.418590Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:59.418643Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:59.418722Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"warn","ts":"2024-08-14T16:40:59.418965Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388","error":"context canceled"}
	{"level":"warn","ts":"2024-08-14T16:40:59.419142Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b8cd3528b7e3c388","error":"failed to read b8cd3528b7e3c388 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-14T16:40:59.419351Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"warn","ts":"2024-08-14T16:40:59.419645Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388","error":"context canceled"}
	{"level":"info","ts":"2024-08-14T16:40:59.419750Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:59.419801Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:59.419904Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"7ab0973fa604e492","removed-remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:40:59.419964Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"7ab0973fa604e492","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"b8cd3528b7e3c388"}
	{"level":"warn","ts":"2024-08-14T16:40:59.434690Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"7ab0973fa604e492","remote-peer-id-stream-handler":"7ab0973fa604e492","remote-peer-id-from":"b8cd3528b7e3c388"}
	{"level":"warn","ts":"2024-08-14T16:40:59.436265Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.167:59268","server-name":"","error":"EOF"}
	
	
	==> etcd [be37bacc582100ea8cda2f5a0cefaaef29c95c1bc9a887f06bc17e30d7afb905] <==
	{"level":"warn","ts":"2024-08-14T16:36:00.807914Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T16:36:00.012535Z","time spent":"795.373209ms","remote":"127.0.0.1:36734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:500 "}
	2024/08/14 16:36:00 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-14T16:36:00.860655Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T16:36:00.860709Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T16:36:00.862172Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7ab0973fa604e492","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-14T16:36:00.862375Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862408Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862434Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862528Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862605Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862664Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862695Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b8cd3528b7e3c388"}
	{"level":"info","ts":"2024-08-14T16:36:00.862719Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.862767Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.862818Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.862908Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.862966Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.863031Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7ab0973fa604e492","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.863066Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64b82df06bebb0af"}
	{"level":"info","ts":"2024-08-14T16:36:00.866691Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"warn","ts":"2024-08-14T16:36:00.866777Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.84983465s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-14T16:36:00.866827Z","caller":"traceutil/trace.go:171","msg":"trace[1941826792] range","detail":"{range_begin:; range_end:; }","duration":"8.849910331s","start":"2024-08-14T16:35:52.016908Z","end":"2024-08-14T16:36:00.866818Z","steps":["trace[1941826792] 'agreement among raft nodes before linearized reading'  (duration: 8.849832588s)"],"step_count":1}
	{"level":"error","ts":"2024-08-14T16:36:00.866862Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-14T16:36:00.866938Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.4:2380"}
	{"level":"info","ts":"2024-08-14T16:36:00.866971Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-597780","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.4:2380"],"advertise-client-urls":["https://192.168.39.4:2379"]}
	
	
	==> kernel <==
	 16:43:32 up 18 min,  0 users,  load average: 0.07, 0.28, 0.25
	Linux ha-597780 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [71c507d68d37b6072cf0b51abc2fff7f57582c574a8ec265020f3676b0d5682f] <==
	I0814 16:42:48.870632       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:42:58.870386       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:42:58.870426       1 main.go:299] handling current node
	I0814 16:42:58.870446       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:42:58.870451       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:42:58.870601       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:42:58.870622       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:43:08.879051       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:43:08.879101       1 main.go:299] handling current node
	I0814 16:43:08.881031       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:43:08.881103       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:43:08.881546       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:43:08.881557       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:43:18.879182       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:43:18.879347       1 main.go:299] handling current node
	I0814 16:43:18.879379       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:43:18.879398       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:43:18.879536       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:43:18.879568       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:43:28.872275       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:43:28.872426       1 main.go:299] handling current node
	I0814 16:43:28.872509       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:43:28.872536       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:43:28.872901       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:43:28.872955       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [9383508aacb4719aed0b7d253b4358ccbfcde5ad0e4a7301771c4634a29ae8e1] <==
	I0814 16:35:37.358034       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:35:37.358193       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:35:37.358403       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:35:37.358434       1 main.go:299] handling current node
	I0814 16:35:37.358478       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:35:37.358495       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:35:37.358573       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:35:37.358602       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	E0814 16:35:44.073856       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1946&timeout=5m12s&timeoutSeconds=312&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	I0814 16:35:47.360336       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:35:47.360394       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	I0814 16:35:47.360598       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:35:47.360620       1 main.go:299] handling current node
	I0814 16:35:47.360632       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:35:47.360637       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:35:47.360690       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:35:47.360706       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:35:57.358160       1 main.go:295] Handling node with IPs: map[192.168.39.4:{}]
	I0814 16:35:57.358253       1 main.go:299] handling current node
	I0814 16:35:57.358272       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0814 16:35:57.358278       1 main.go:322] Node ha-597780-m02 has CIDR [10.244.1.0/24] 
	I0814 16:35:57.358453       1 main.go:295] Handling node with IPs: map[192.168.39.167:{}]
	I0814 16:35:57.358471       1 main.go:322] Node ha-597780-m03 has CIDR [10.244.2.0/24] 
	I0814 16:35:57.358523       1 main.go:295] Handling node with IPs: map[192.168.39.209:{}]
	I0814 16:35:57.358540       1 main.go:322] Node ha-597780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0b0090111a9078cd7d7114e8e41eba8b0e3e9244a6d56c800001d55c647de047] <==
	I0814 16:38:24.715866       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 16:38:24.722027       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 16:38:24.722809       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 16:38:24.722976       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 16:38:24.723003       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0814 16:38:24.723079       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 16:38:24.723705       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0814 16:38:24.723106       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 16:38:24.726296       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 16:38:24.727501       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0814 16:38:24.727563       1 aggregator.go:171] initial CRD sync complete...
	I0814 16:38:24.727600       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 16:38:24.727623       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 16:38:24.727645       1 cache.go:39] Caches are synced for autoregister controller
	W0814 16:38:24.734580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.167 192.168.39.225]
	I0814 16:38:24.736082       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 16:38:24.736141       1 policy_source.go:224] refreshing policies
	I0814 16:38:24.804969       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 16:38:24.836589       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 16:38:24.844493       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0814 16:38:24.847453       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0814 16:38:25.622949       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0814 16:38:26.066028       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.167 192.168.39.225 192.168.39.4]
	W0814 16:38:36.061733       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.225 192.168.39.4]
	W0814 16:41:16.070293       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.225 192.168.39.4]
	
	
	==> kube-apiserver [bd1bda5de444ee7b1f76b21acfc57a04e9f13279c7d1c868858a723a1af6d5b0] <==
	I0814 16:37:37.970015       1 options.go:228] external host was not specified, using 192.168.39.4
	I0814 16:37:38.025642       1 server.go:142] Version: v1.31.0
	I0814 16:37:38.025692       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:37:38.820475       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0814 16:37:38.834424       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 16:37:38.845107       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0814 16:37:38.845262       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0814 16:37:38.845923       1 instance.go:232] Using reconciler: lease
	W0814 16:37:58.820468       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0814 16:37:58.820640       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0814 16:37:58.848815       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [047dd2746b2ff4c8d2e079bf9e0be2e3f51cb4e115f58578ac5fc150d0b5ec89] <==
	I0814 16:37:38.899600       1 serving.go:386] Generated self-signed cert in-memory
	I0814 16:37:39.390494       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0814 16:37:39.390528       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:37:39.392316       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0814 16:37:39.392452       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0814 16:37:39.392944       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 16:37:39.393012       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0814 16:37:59.855358       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.4:8443/healthz\": dial tcp 192.168.39.4:8443: connect: connection refused"
	
	
	==> kube-controller-manager [0feebc4c91acc20973f940c45d9b14cd44c58400f983e72d31ca4be3ec4fd4b1] <==
	I0814 16:40:58.222202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.137µs"
	I0814 16:40:58.923194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="110.896µs"
	I0814 16:40:58.930918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.426µs"
	I0814 16:41:01.706032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.124075ms"
	I0814 16:41:01.706787       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.007µs"
	I0814 16:41:10.240884       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-597780-m04"
	I0814 16:41:10.242079       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m03"
	E0814 16:41:10.298054       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-597780-m03\", UID:\"4695d653-42cd-4819-9f8b-5dca70767fbc\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-597780-m03\", UID:\"01cfcc6b-78dc-4408-8feb-7ab7a9536efe\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-597780-m03\" not found" logger="UnhandledError"
	E0814 16:41:28.006398       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:28.006488       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:28.006497       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:28.006502       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:28.006507       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:48.007385       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:48.007557       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:48.007584       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:48.007608       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	E0814 16:41:48.007631       1 gc_controller.go:151] "Failed to get node" err="node \"ha-597780-m03\" not found" logger="pod-garbage-collector-controller" node="ha-597780-m03"
	I0814 16:41:48.274897       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:41:48.296471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:41:48.343288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.092304ms"
	I0814 16:41:48.343550       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.439µs"
	I0814 16:41:51.261419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:41:53.413109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780-m04"
	I0814 16:43:32.050055       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-597780"
	
	
	==> kube-proxy [37ced764976790109b4f733c5123edcf3f4f65a61abb8c45adbbb307eaf75eeb] <==
	E0814 16:34:49.930498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:49.930596       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:49.930810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:49.932409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:49.932738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:56.073720       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:56.074147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:56.073984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:56.074379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:34:56.074051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:34:56.074488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:05.289601       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:05.289782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:05.289656       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:05.289953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:08.360934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:08.361036       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:26.794516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:26.794584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1935\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:26.794668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:26.794698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-597780&resourceVersion=1913\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:26.794803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:26.794910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0814 16:35:57.514049       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838": dial tcp 192.168.39.254:8443: connect: no route to host
	E0814 16:35:57.514432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1838\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [bd4f3f03c5946821483db35d82adadf94e716c80acefdfa9b86eeca5126ebdea] <==
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:37:41.960693       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:37:45.034507       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:37:48.104646       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:37:54.249582       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:38:03.465892       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0814 16:38:24.968712       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-597780\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0814 16:38:24.968820       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0814 16:38:24.968922       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:38:25.017893       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:38:25.018012       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:38:25.018055       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:38:25.021020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:38:25.021559       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:38:25.022269       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:38:25.025576       1 config.go:197] "Starting service config controller"
	I0814 16:38:25.025660       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:38:25.025805       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:38:25.025826       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:38:25.026869       1 config.go:326] "Starting node config controller"
	I0814 16:38:25.026892       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:38:25.126564       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:38:25.126777       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:38:25.127915       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [804f82521456895dcd96d833472a98b47f70324216f760e52a3f5d261531298f] <==
	W0814 16:38:16.685743       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.4:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:16.685914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.4:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:16.841534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.4:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:16.841641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.4:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:17.221011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:17.221115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:18.663352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:18.663469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:19.791843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.4:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:19.791894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.4:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:19.956861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.4:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:19.956984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.4:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:20.271610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.4:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:20.271672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.4:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:21.087552       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.4:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:21.087676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.4:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:21.190590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.4:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:21.190660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.4:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	W0814 16:38:21.931308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.4:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.4:8443: connect: connection refused
	E0814 16:38:21.931369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.4:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.4:8443: connect: connection refused" logger="UnhandledError"
	I0814 16:38:33.364665       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 16:40:56.147850       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p6nj4\": pod busybox-7dff88458-p6nj4 is already assigned to node \"ha-597780-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-p6nj4" node="ha-597780-m04"
	E0814 16:40:56.151051       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 22090ef5-afcc-4413-bed2-f267247c0a10(default/busybox-7dff88458-p6nj4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-p6nj4"
	E0814 16:40:56.151177       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-p6nj4\": pod busybox-7dff88458-p6nj4 is already assigned to node \"ha-597780-m04\"" pod="default/busybox-7dff88458-p6nj4"
	I0814 16:40:56.151322       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-p6nj4" node="ha-597780-m04"
	
	
	==> kube-scheduler [9049789221ccd20ac23b00f47bf79f1d702bee7108e1a1afdc6692558f81b59e] <==
	E0814 16:29:14.513586       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d61c6e28-3a9c-47b5-ad97-6d1c77c30857(default/busybox-7dff88458-w9lh2) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-w9lh2"
	E0814 16:29:14.513669       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-w9lh2\": pod busybox-7dff88458-w9lh2 is already assigned to node \"ha-597780-m02\"" pod="default/busybox-7dff88458-w9lh2"
	I0814 16:29:14.513886       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-w9lh2" node="ha-597780-m02"
	E0814 16:29:14.544849       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-27k42\": pod busybox-7dff88458-27k42 is already assigned to node \"ha-597780-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-27k42" node="ha-597780-m03"
	E0814 16:29:14.544959       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-27k42\": pod busybox-7dff88458-27k42 is already assigned to node \"ha-597780-m03\"" pod="default/busybox-7dff88458-27k42"
	E0814 16:29:14.545719       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rq7wd\": pod busybox-7dff88458-rq7wd is already assigned to node \"ha-597780\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-rq7wd" node="ha-597780"
	E0814 16:29:14.557325       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-rq7wd\": pod busybox-7dff88458-rq7wd is already assigned to node \"ha-597780\"" pod="default/busybox-7dff88458-rq7wd"
	E0814 16:29:54.657005       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5x5s7\": pod kindnet-5x5s7 is already assigned to node \"ha-597780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5x5s7" node="ha-597780-m04"
	E0814 16:29:54.657112       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 45af1890-2443-48af-a4f1-38ce0ab0f558(kube-system/kindnet-5x5s7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5x5s7"
	E0814 16:29:54.657139       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5x5s7\": pod kindnet-5x5s7 is already assigned to node \"ha-597780-m04\"" pod="kube-system/kindnet-5x5s7"
	I0814 16:29:54.657164       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5x5s7" node="ha-597780-m04"
	E0814 16:35:52.111972       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0814 16:35:52.252623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0814 16:35:53.591543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0814 16:35:54.659904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0814 16:35:55.135355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0814 16:35:55.551945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0814 16:35:56.194429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0814 16:35:56.452057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0814 16:35:57.575003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0814 16:35:57.601780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0814 16:35:57.652631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0814 16:35:59.238461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0814 16:36:00.196272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0814 16:36:00.771517       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 14 16:42:00 ha-597780 kubelet[1315]: E0814 16:42:00.121816    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653720121469738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:00 ha-597780 kubelet[1315]: E0814 16:42:00.121851    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653720121469738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:10 ha-597780 kubelet[1315]: E0814 16:42:10.124199    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653730123843364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:10 ha-597780 kubelet[1315]: E0814 16:42:10.124574    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653730123843364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:20 ha-597780 kubelet[1315]: E0814 16:42:20.127280    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653740126351017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:20 ha-597780 kubelet[1315]: E0814 16:42:20.127744    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653740126351017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:30 ha-597780 kubelet[1315]: E0814 16:42:30.130204    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653750129458203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:30 ha-597780 kubelet[1315]: E0814 16:42:30.130281    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653750129458203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:40 ha-597780 kubelet[1315]: E0814 16:42:40.132742    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653760132264183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:40 ha-597780 kubelet[1315]: E0814 16:42:40.132798    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653760132264183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:50 ha-597780 kubelet[1315]: E0814 16:42:50.134635    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653770134118762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:50 ha-597780 kubelet[1315]: E0814 16:42:50.134950    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653770134118762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:42:59 ha-597780 kubelet[1315]: E0814 16:42:59.872486    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 16:42:59 ha-597780 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 16:42:59 ha-597780 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 16:42:59 ha-597780 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 16:42:59 ha-597780 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 16:43:00 ha-597780 kubelet[1315]: E0814 16:43:00.137342    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653780136880011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:43:00 ha-597780 kubelet[1315]: E0814 16:43:00.137378    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653780136880011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:43:10 ha-597780 kubelet[1315]: E0814 16:43:10.139146    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653790138794607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:43:10 ha-597780 kubelet[1315]: E0814 16:43:10.139585    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653790138794607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:43:20 ha-597780 kubelet[1315]: E0814 16:43:20.144026    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653800141840942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:43:20 ha-597780 kubelet[1315]: E0814 16:43:20.144381    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653800141840942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:43:30 ha-597780 kubelet[1315]: E0814 16:43:30.146854    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653810146160057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 16:43:30 ha-597780 kubelet[1315]: E0814 16:43:30.147281    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723653810146160057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0814 16:43:31.880446   40900 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19446-13977/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
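The "bufio.Scanner: token too long" failure in the stderr block above is Go's generic bufio.ErrTooLong: a single line in lastStart.txt exceeded the scanner's default 64 KiB token limit, so the post-mortem "Last Start" output could not be produced for this test. A minimal sketch (illustrative only, not minikube's actual logs.go) of reading such a file with an enlarged scanner buffer:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLongLines scans a log file whose lines may exceed bufio.Scanner's
	// default 64 KiB token limit, growing the buffer so Scan does not stop
	// with bufio.ErrTooLong ("token too long").
	func readLongLines(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Start with a 1 MiB buffer and allow tokens up to 16 MiB.
		sc.Buffer(make([]byte, 1024*1024), 16*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		return sc.Err()
	}

	func main() {
		// Path is illustrative; the report above reads .../logs/lastStart.txt.
		if err := readLongLines("lastStart.txt"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
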
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-597780 -n ha-597780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-597780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.52s)

TestMultiNode/serial/RestartKeepsNodes (321.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-986999
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-986999
E0814 16:59:29.460180   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-986999: exit status 82 (2m1.751969803s)

-- stdout --
	* Stopping node "multinode-986999-m03"  ...
	* Stopping node "multinode-986999-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-986999" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-986999 --wait=true -v=8 --alsologtostderr
E0814 17:02:32.527733   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:03:02.589405   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-986999 --wait=true -v=8 --alsologtostderr: (3m17.268391328s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-986999
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-986999 -n multinode-986999
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-986999 logs -n 25: (1.44883122s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m02:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3655799611/001/cp-test_multinode-986999-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m02:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999:/home/docker/cp-test_multinode-986999-m02_multinode-986999.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n multinode-986999 sudo cat                                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /home/docker/cp-test_multinode-986999-m02_multinode-986999.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m02:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03:/home/docker/cp-test_multinode-986999-m02_multinode-986999-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n multinode-986999-m03 sudo cat                                   | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /home/docker/cp-test_multinode-986999-m02_multinode-986999-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp testdata/cp-test.txt                                                | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3655799611/001/cp-test_multinode-986999-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999:/home/docker/cp-test_multinode-986999-m03_multinode-986999.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n multinode-986999 sudo cat                                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /home/docker/cp-test_multinode-986999-m03_multinode-986999.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02:/home/docker/cp-test_multinode-986999-m03_multinode-986999-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n multinode-986999-m02 sudo cat                                   | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /home/docker/cp-test_multinode-986999-m03_multinode-986999-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-986999 node stop m03                                                          | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	| node    | multinode-986999 node start                                                             | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:58 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-986999                                                                | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:58 UTC |                     |
	| stop    | -p multinode-986999                                                                     | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:58 UTC |                     |
	| start   | -p multinode-986999                                                                     | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 17:00 UTC | 14 Aug 24 17:03 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-986999                                                                | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 17:03 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:00:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:00:25.035973   50203 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:00:25.036221   50203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:00:25.036230   50203 out.go:304] Setting ErrFile to fd 2...
	I0814 17:00:25.036237   50203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:00:25.036454   50203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:00:25.037052   50203 out.go:298] Setting JSON to false
	I0814 17:00:25.037979   50203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6169,"bootTime":1723648656,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:00:25.038041   50203 start.go:139] virtualization: kvm guest
	I0814 17:00:25.040189   50203 out.go:177] * [multinode-986999] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:00:25.041435   50203 notify.go:220] Checking for updates...
	I0814 17:00:25.041472   50203 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:00:25.042824   50203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:00:25.044184   50203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:00:25.045328   50203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:00:25.046521   50203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:00:25.047826   50203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:00:25.049406   50203 config.go:182] Loaded profile config "multinode-986999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:00:25.049512   50203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:00:25.049955   50203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:00:25.050003   50203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:00:25.066222   50203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I0814 17:00:25.066699   50203 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:00:25.067252   50203 main.go:141] libmachine: Using API Version  1
	I0814 17:00:25.067281   50203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:00:25.067695   50203 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:00:25.067916   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:00:25.103209   50203 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:00:25.104408   50203 start.go:297] selected driver: kvm2
	I0814 17:00:25.104426   50203 start.go:901] validating driver "kvm2" against &{Name:multinode-986999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-986999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:00:25.104563   50203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:00:25.104903   50203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:00:25.104975   50203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:00:25.120177   50203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:00:25.121297   50203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:00:25.121356   50203 cni.go:84] Creating CNI manager for ""
	I0814 17:00:25.121368   50203 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0814 17:00:25.121431   50203 start.go:340] cluster config:
	{Name:multinode-986999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-986999 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:00:25.121577   50203 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:00:25.123425   50203 out.go:177] * Starting "multinode-986999" primary control-plane node in "multinode-986999" cluster
	I0814 17:00:25.124732   50203 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:00:25.124767   50203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:00:25.124782   50203 cache.go:56] Caching tarball of preloaded images
	I0814 17:00:25.124889   50203 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:00:25.124903   50203 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 17:00:25.125024   50203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/config.json ...
	I0814 17:00:25.125231   50203 start.go:360] acquireMachinesLock for multinode-986999: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:00:25.125278   50203 start.go:364] duration metric: took 29.372µs to acquireMachinesLock for "multinode-986999"
	I0814 17:00:25.125300   50203 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:00:25.125314   50203 fix.go:54] fixHost starting: 
	I0814 17:00:25.125585   50203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:00:25.125620   50203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:00:25.139754   50203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0814 17:00:25.140256   50203 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:00:25.140841   50203 main.go:141] libmachine: Using API Version  1
	I0814 17:00:25.140868   50203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:00:25.141153   50203 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:00:25.141354   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:00:25.141647   50203 main.go:141] libmachine: (multinode-986999) Calling .GetState
	I0814 17:00:25.143192   50203 fix.go:112] recreateIfNeeded on multinode-986999: state=Running err=<nil>
	W0814 17:00:25.143217   50203 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:00:25.145217   50203 out.go:177] * Updating the running kvm2 "multinode-986999" VM ...
	I0814 17:00:25.146456   50203 machine.go:94] provisionDockerMachine start ...
	I0814 17:00:25.146483   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:00:25.146694   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.149148   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.149626   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.149657   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.149848   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.150011   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.150161   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.150304   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.150455   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:00:25.150730   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:00:25.150747   50203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:00:25.260389   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-986999
	
	I0814 17:00:25.260422   50203 main.go:141] libmachine: (multinode-986999) Calling .GetMachineName
	I0814 17:00:25.260783   50203 buildroot.go:166] provisioning hostname "multinode-986999"
	I0814 17:00:25.260813   50203 main.go:141] libmachine: (multinode-986999) Calling .GetMachineName
	I0814 17:00:25.261016   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.263795   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.264221   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.264261   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.264370   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.264561   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.264707   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.264828   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.265003   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:00:25.265176   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:00:25.265189   50203 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-986999 && echo "multinode-986999" | sudo tee /etc/hostname
	I0814 17:00:25.387512   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-986999
	
	I0814 17:00:25.387549   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.390615   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.391038   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.391069   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.391245   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.391439   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.391551   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.391694   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.391854   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:00:25.392045   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:00:25.392070   50203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-986999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-986999/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-986999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:00:25.499936   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:00:25.499998   50203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:00:25.500058   50203 buildroot.go:174] setting up certificates
	I0814 17:00:25.500071   50203 provision.go:84] configureAuth start
	I0814 17:00:25.500089   50203 main.go:141] libmachine: (multinode-986999) Calling .GetMachineName
	I0814 17:00:25.500377   50203 main.go:141] libmachine: (multinode-986999) Calling .GetIP
	I0814 17:00:25.502738   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.503033   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.503062   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.503215   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.505716   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.506023   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.506060   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.506221   50203 provision.go:143] copyHostCerts
	I0814 17:00:25.506256   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:00:25.506305   50203 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:00:25.506318   50203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:00:25.506385   50203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:00:25.506481   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:00:25.506500   50203 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:00:25.506504   50203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:00:25.506529   50203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:00:25.506591   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:00:25.506607   50203 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:00:25.506612   50203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:00:25.506641   50203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:00:25.506691   50203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.multinode-986999 san=[127.0.0.1 192.168.39.36 localhost minikube multinode-986999]
	I0814 17:00:25.783493   50203 provision.go:177] copyRemoteCerts
	I0814 17:00:25.783554   50203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:00:25.783581   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.786295   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.786646   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.786676   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.786877   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.787080   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.787237   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.787379   50203 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 17:00:25.871094   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 17:00:25.871184   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:00:25.898646   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 17:00:25.898721   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0814 17:00:25.924679   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 17:00:25.924772   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:00:25.946983   50203 provision.go:87] duration metric: took 446.898202ms to configureAuth
	I0814 17:00:25.947007   50203 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:00:25.947219   50203 config.go:182] Loaded profile config "multinode-986999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:00:25.947295   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.949721   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.950091   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.950124   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.950269   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.950534   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.950689   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.950810   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.950967   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:00:25.951163   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:00:25.951186   50203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:01:56.585590   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
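The "%!s(MISSING)" fragments in the SSH command above (and in "date +%!s(MISSING).%!N(MISSING)" later in this log) are Go's fmt placeholder for a %s verb with no matching argument: a shell snippet containing a literal %s appears to have been passed through a Printf-style call as the format string rather than as an argument. A small, self-contained reproduction of the artifact (illustrative only, not minikube code):

	package main

	import "fmt"

	func main() {
		// A shell command containing a literal printf format verb.
		cmd := `sudo mkdir -p /etc/sysconfig && printf %s "..." | sudo tee /etc/sysconfig/crio.minikube`

		// Used as the format string, fmt treats %s as a verb with no
		// argument and renders it as "%!s(MISSING)" -- the artifact seen
		// in the log above.
		fmt.Printf("About to run SSH command:\n" + cmd + "\n")

		// Passed as an argument to a %s verb instead, the literal %s
		// inside the command survives untouched.
		fmt.Printf("About to run SSH command:\n%s\n", cmd)
	}
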
	I0814 17:01:56.585616   50203 machine.go:97] duration metric: took 1m31.439141262s to provisionDockerMachine
	I0814 17:01:56.585636   50203 start.go:293] postStartSetup for "multinode-986999" (driver="kvm2")
	I0814 17:01:56.585646   50203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:01:56.585661   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.585996   50203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:01:56.586049   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:01:56.589370   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.589874   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.589898   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.590071   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:01:56.590270   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.590430   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:01:56.590578   50203 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 17:01:56.675273   50203 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:01:56.679571   50203 command_runner.go:130] > NAME=Buildroot
	I0814 17:01:56.679591   50203 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0814 17:01:56.679598   50203 command_runner.go:130] > ID=buildroot
	I0814 17:01:56.679605   50203 command_runner.go:130] > VERSION_ID=2023.02.9
	I0814 17:01:56.679614   50203 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0814 17:01:56.679697   50203 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:01:56.679723   50203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:01:56.679807   50203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:01:56.679913   50203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:01:56.679926   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 17:01:56.680074   50203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:01:56.688797   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:01:56.711346   50203 start.go:296] duration metric: took 125.697537ms for postStartSetup
	I0814 17:01:56.711396   50203 fix.go:56] duration metric: took 1m31.586084899s for fixHost
	I0814 17:01:56.711440   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:01:56.714369   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.714807   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.714840   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.715094   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:01:56.715308   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.715542   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.715704   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:01:56.715926   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:01:56.716124   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:01:56.716136   50203 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:01:56.819457   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723654916.793494287
	
	I0814 17:01:56.819477   50203 fix.go:216] guest clock: 1723654916.793494287
	I0814 17:01:56.819486   50203 fix.go:229] Guest: 2024-08-14 17:01:56.793494287 +0000 UTC Remote: 2024-08-14 17:01:56.711401758 +0000 UTC m=+91.710240206 (delta=82.092529ms)
	I0814 17:01:56.819547   50203 fix.go:200] guest clock delta is within tolerance: 82.092529ms
	I0814 17:01:56.819555   50203 start.go:83] releasing machines lock for "multinode-986999", held for 1m31.69426376s
	I0814 17:01:56.819712   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.820013   50203 main.go:141] libmachine: (multinode-986999) Calling .GetIP
	I0814 17:01:56.822586   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.822954   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.822977   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.823147   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.823786   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.823950   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.824040   50203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:01:56.824100   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:01:56.824163   50203 ssh_runner.go:195] Run: cat /version.json
	I0814 17:01:56.824188   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:01:56.826750   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.827101   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.827129   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.827165   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.827257   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:01:56.827440   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.827615   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:01:56.827698   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.827722   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.827783   50203 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 17:01:56.827911   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:01:56.828078   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.828220   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:01:56.828355   50203 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 17:01:56.942928   50203 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0814 17:01:56.942973   50203 command_runner.go:130] > {"iso_version": "v1.33.1-1723567878-19429", "kicbase_version": "v0.0.44-1723026928-19389", "minikube_version": "v1.33.1", "commit": "99323a71d52eff08226c75fcaff04297eb5d3584"}
	I0814 17:01:56.943120   50203 ssh_runner.go:195] Run: systemctl --version
	I0814 17:01:56.948890   50203 command_runner.go:130] > systemd 252 (252)
	I0814 17:01:56.948924   50203 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0814 17:01:56.948973   50203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:01:57.103505   50203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0814 17:01:57.110190   50203 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0814 17:01:57.110460   50203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:01:57.110524   50203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:01:57.119349   50203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 17:01:57.119371   50203 start.go:495] detecting cgroup driver to use...
	I0814 17:01:57.119435   50203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:01:57.136765   50203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:01:57.150130   50203 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:01:57.150188   50203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:01:57.163754   50203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:01:57.176624   50203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:01:57.329437   50203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:01:57.477998   50203 docker.go:233] disabling docker service ...
	I0814 17:01:57.478080   50203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:01:57.494616   50203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:01:57.508171   50203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:01:57.643664   50203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:01:57.777641   50203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:01:57.791089   50203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:01:57.823066   50203 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0814 17:01:57.823123   50203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:01:57.823164   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.833292   50203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:01:57.833388   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.842912   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.852307   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.861662   50203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:01:57.871552   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.881077   50203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.891338   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
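(Aside, not part of the captured log: a minimal sketch of the cri-o configuration these sed edits converge on. The key names and values are taken directly from the commands above; the drop-in file name 99-minikube-sketch.conf and the section layout are assumptions for illustration only, since the test itself edits /etc/crio/crio.conf.d/02-crio.conf in place.)

# Sketch only: an equivalent drop-in instead of editing 02-crio.conf in place.
# pause_image, cgroup_manager, conmon_cgroup and the default_sysctls entry mirror
# the values passed to sed in the log lines above; everything else is left at defaults.
sudo tee /etc/crio/crio.conf.d/99-minikube-sketch.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF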
	I0814 17:01:57.901348   50203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:01:57.914134   50203 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0814 17:01:57.914269   50203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:01:57.923005   50203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:01:58.060734   50203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:01:58.285074   50203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:01:58.285146   50203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:01:58.289694   50203 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0814 17:01:58.289722   50203 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0814 17:01:58.289730   50203 command_runner.go:130] > Device: 0,22	Inode: 1333        Links: 1
	I0814 17:01:58.289740   50203 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0814 17:01:58.289751   50203 command_runner.go:130] > Access: 2024-08-14 17:01:58.160329986 +0000
	I0814 17:01:58.289760   50203 command_runner.go:130] > Modify: 2024-08-14 17:01:58.160329986 +0000
	I0814 17:01:58.289772   50203 command_runner.go:130] > Change: 2024-08-14 17:01:58.160329986 +0000
	I0814 17:01:58.289778   50203 command_runner.go:130] >  Birth: -
	I0814 17:01:58.290018   50203 start.go:563] Will wait 60s for crictl version
	I0814 17:01:58.290065   50203 ssh_runner.go:195] Run: which crictl
	I0814 17:01:58.293348   50203 command_runner.go:130] > /usr/bin/crictl
	I0814 17:01:58.293425   50203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:01:58.331476   50203 command_runner.go:130] > Version:  0.1.0
	I0814 17:01:58.331505   50203 command_runner.go:130] > RuntimeName:  cri-o
	I0814 17:01:58.331510   50203 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0814 17:01:58.331515   50203 command_runner.go:130] > RuntimeApiVersion:  v1
	I0814 17:01:58.331525   50203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:01:58.331589   50203 ssh_runner.go:195] Run: crio --version
	I0814 17:01:58.358116   50203 command_runner.go:130] > crio version 1.29.1
	I0814 17:01:58.358142   50203 command_runner.go:130] > Version:        1.29.1
	I0814 17:01:58.358148   50203 command_runner.go:130] > GitCommit:      unknown
	I0814 17:01:58.358153   50203 command_runner.go:130] > GitCommitDate:  unknown
	I0814 17:01:58.358157   50203 command_runner.go:130] > GitTreeState:   clean
	I0814 17:01:58.358163   50203 command_runner.go:130] > BuildDate:      2024-08-13T22:49:54Z
	I0814 17:01:58.358167   50203 command_runner.go:130] > GoVersion:      go1.21.6
	I0814 17:01:58.358171   50203 command_runner.go:130] > Compiler:       gc
	I0814 17:01:58.358176   50203 command_runner.go:130] > Platform:       linux/amd64
	I0814 17:01:58.358180   50203 command_runner.go:130] > Linkmode:       dynamic
	I0814 17:01:58.358184   50203 command_runner.go:130] > BuildTags:      
	I0814 17:01:58.358191   50203 command_runner.go:130] >   containers_image_ostree_stub
	I0814 17:01:58.358197   50203 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0814 17:01:58.358203   50203 command_runner.go:130] >   btrfs_noversion
	I0814 17:01:58.358213   50203 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0814 17:01:58.358220   50203 command_runner.go:130] >   libdm_no_deferred_remove
	I0814 17:01:58.358230   50203 command_runner.go:130] >   seccomp
	I0814 17:01:58.358236   50203 command_runner.go:130] > LDFlags:          unknown
	I0814 17:01:58.358240   50203 command_runner.go:130] > SeccompEnabled:   true
	I0814 17:01:58.358244   50203 command_runner.go:130] > AppArmorEnabled:  false
	I0814 17:01:58.358339   50203 ssh_runner.go:195] Run: crio --version
	I0814 17:01:58.384810   50203 command_runner.go:130] > crio version 1.29.1
	I0814 17:01:58.384837   50203 command_runner.go:130] > Version:        1.29.1
	I0814 17:01:58.384846   50203 command_runner.go:130] > GitCommit:      unknown
	I0814 17:01:58.384851   50203 command_runner.go:130] > GitCommitDate:  unknown
	I0814 17:01:58.384855   50203 command_runner.go:130] > GitTreeState:   clean
	I0814 17:01:58.384860   50203 command_runner.go:130] > BuildDate:      2024-08-13T22:49:54Z
	I0814 17:01:58.384864   50203 command_runner.go:130] > GoVersion:      go1.21.6
	I0814 17:01:58.384871   50203 command_runner.go:130] > Compiler:       gc
	I0814 17:01:58.384878   50203 command_runner.go:130] > Platform:       linux/amd64
	I0814 17:01:58.384890   50203 command_runner.go:130] > Linkmode:       dynamic
	I0814 17:01:58.384901   50203 command_runner.go:130] > BuildTags:      
	I0814 17:01:58.384908   50203 command_runner.go:130] >   containers_image_ostree_stub
	I0814 17:01:58.384914   50203 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0814 17:01:58.384920   50203 command_runner.go:130] >   btrfs_noversion
	I0814 17:01:58.384924   50203 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0814 17:01:58.384929   50203 command_runner.go:130] >   libdm_no_deferred_remove
	I0814 17:01:58.384932   50203 command_runner.go:130] >   seccomp
	I0814 17:01:58.384936   50203 command_runner.go:130] > LDFlags:          unknown
	I0814 17:01:58.384940   50203 command_runner.go:130] > SeccompEnabled:   true
	I0814 17:01:58.384944   50203 command_runner.go:130] > AppArmorEnabled:  false
	I0814 17:01:58.388048   50203 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:01:58.389470   50203 main.go:141] libmachine: (multinode-986999) Calling .GetIP
	I0814 17:01:58.392357   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:58.392717   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:58.392746   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:58.392954   50203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 17:01:58.397347   50203 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0814 17:01:58.397462   50203 kubeadm.go:883] updating cluster {Name:multinode-986999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-986999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:01:58.397618   50203 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:01:58.397685   50203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:01:58.436210   50203 command_runner.go:130] > {
	I0814 17:01:58.436243   50203 command_runner.go:130] >   "images": [
	I0814 17:01:58.436249   50203 command_runner.go:130] >     {
	I0814 17:01:58.436264   50203 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0814 17:01:58.436272   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436281   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0814 17:01:58.436286   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436293   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436337   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0814 17:01:58.436354   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0814 17:01:58.436361   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436370   50203 command_runner.go:130] >       "size": "87165492",
	I0814 17:01:58.436378   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436386   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.436397   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.436407   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.436414   50203 command_runner.go:130] >     },
	I0814 17:01:58.436421   50203 command_runner.go:130] >     {
	I0814 17:01:58.436433   50203 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0814 17:01:58.436440   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436450   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0814 17:01:58.436458   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436466   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436480   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0814 17:01:58.436493   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0814 17:01:58.436500   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436507   50203 command_runner.go:130] >       "size": "87190579",
	I0814 17:01:58.436515   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436526   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.436537   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.436545   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.436558   50203 command_runner.go:130] >     },
	I0814 17:01:58.436566   50203 command_runner.go:130] >     {
	I0814 17:01:58.436578   50203 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0814 17:01:58.436586   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436595   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0814 17:01:58.436603   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436611   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436624   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0814 17:01:58.436638   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0814 17:01:58.436646   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436655   50203 command_runner.go:130] >       "size": "1363676",
	I0814 17:01:58.436662   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436670   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.436678   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.436686   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.436699   50203 command_runner.go:130] >     },
	I0814 17:01:58.436706   50203 command_runner.go:130] >     {
	I0814 17:01:58.436720   50203 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0814 17:01:58.436731   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436743   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0814 17:01:58.436751   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436760   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436777   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0814 17:01:58.436798   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0814 17:01:58.436808   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436816   50203 command_runner.go:130] >       "size": "31470524",
	I0814 17:01:58.436825   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436835   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.436845   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.436856   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.436863   50203 command_runner.go:130] >     },
	I0814 17:01:58.436869   50203 command_runner.go:130] >     {
	I0814 17:01:58.436881   50203 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0814 17:01:58.436891   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436901   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0814 17:01:58.436908   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436916   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436932   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0814 17:01:58.436947   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0814 17:01:58.436956   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436964   50203 command_runner.go:130] >       "size": "61245718",
	I0814 17:01:58.436975   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436983   50203 command_runner.go:130] >       "username": "nonroot",
	I0814 17:01:58.436993   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437003   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437013   50203 command_runner.go:130] >     },
	I0814 17:01:58.437020   50203 command_runner.go:130] >     {
	I0814 17:01:58.437031   50203 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0814 17:01:58.437042   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437053   50203 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0814 17:01:58.437063   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437071   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437086   50203 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0814 17:01:58.437101   50203 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0814 17:01:58.437109   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437117   50203 command_runner.go:130] >       "size": "149009664",
	I0814 17:01:58.437128   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.437136   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.437145   50203 command_runner.go:130] >       },
	I0814 17:01:58.437153   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437164   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437172   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437181   50203 command_runner.go:130] >     },
	I0814 17:01:58.437189   50203 command_runner.go:130] >     {
	I0814 17:01:58.437200   50203 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0814 17:01:58.437206   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437215   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0814 17:01:58.437225   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437233   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437250   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0814 17:01:58.437265   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0814 17:01:58.437274   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437283   50203 command_runner.go:130] >       "size": "95233506",
	I0814 17:01:58.437293   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.437306   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.437315   50203 command_runner.go:130] >       },
	I0814 17:01:58.437323   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437333   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437340   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437348   50203 command_runner.go:130] >     },
	I0814 17:01:58.437358   50203 command_runner.go:130] >     {
	I0814 17:01:58.437370   50203 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0814 17:01:58.437381   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437393   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0814 17:01:58.437402   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437410   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437437   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0814 17:01:58.437454   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0814 17:01:58.437461   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437469   50203 command_runner.go:130] >       "size": "89437512",
	I0814 17:01:58.437478   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.437488   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.437496   50203 command_runner.go:130] >       },
	I0814 17:01:58.437504   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437547   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437561   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437568   50203 command_runner.go:130] >     },
	I0814 17:01:58.437575   50203 command_runner.go:130] >     {
	I0814 17:01:58.437586   50203 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0814 17:01:58.437594   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437603   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0814 17:01:58.437610   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437618   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437631   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0814 17:01:58.437643   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0814 17:01:58.437651   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437661   50203 command_runner.go:130] >       "size": "92728217",
	I0814 17:01:58.437671   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.437682   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437690   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437701   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437709   50203 command_runner.go:130] >     },
	I0814 17:01:58.437716   50203 command_runner.go:130] >     {
	I0814 17:01:58.437730   50203 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0814 17:01:58.437741   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437751   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0814 17:01:58.437762   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437773   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437791   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0814 17:01:58.437806   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0814 17:01:58.437813   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437824   50203 command_runner.go:130] >       "size": "68420936",
	I0814 17:01:58.437835   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.437843   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.437851   50203 command_runner.go:130] >       },
	I0814 17:01:58.437860   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437870   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437877   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437887   50203 command_runner.go:130] >     },
	I0814 17:01:58.437894   50203 command_runner.go:130] >     {
	I0814 17:01:58.437906   50203 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0814 17:01:58.437916   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437925   50203 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0814 17:01:58.437934   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437942   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437957   50203 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0814 17:01:58.437973   50203 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0814 17:01:58.437983   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437992   50203 command_runner.go:130] >       "size": "742080",
	I0814 17:01:58.438002   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.438010   50203 command_runner.go:130] >         "value": "65535"
	I0814 17:01:58.438020   50203 command_runner.go:130] >       },
	I0814 17:01:58.438029   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.438039   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.438047   50203 command_runner.go:130] >       "pinned": true
	I0814 17:01:58.438056   50203 command_runner.go:130] >     }
	I0814 17:01:58.438063   50203 command_runner.go:130] >   ]
	I0814 17:01:58.438072   50203 command_runner.go:130] > }
	I0814 17:01:58.438263   50203 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:01:58.438278   50203 crio.go:433] Images already preloaded, skipping extraction
	I0814 17:01:58.438356   50203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:01:58.468720   50203 command_runner.go:130] > {
	I0814 17:01:58.468745   50203 command_runner.go:130] >   "images": [
	I0814 17:01:58.468751   50203 command_runner.go:130] >     {
	I0814 17:01:58.468764   50203 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0814 17:01:58.468770   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.468778   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0814 17:01:58.468783   50203 command_runner.go:130] >       ],
	I0814 17:01:58.468788   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.468801   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0814 17:01:58.468813   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0814 17:01:58.468822   50203 command_runner.go:130] >       ],
	I0814 17:01:58.468830   50203 command_runner.go:130] >       "size": "87165492",
	I0814 17:01:58.468838   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.468846   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.468858   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.468869   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.468877   50203 command_runner.go:130] >     },
	I0814 17:01:58.468885   50203 command_runner.go:130] >     {
	I0814 17:01:58.468898   50203 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0814 17:01:58.468905   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.468914   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0814 17:01:58.468923   50203 command_runner.go:130] >       ],
	I0814 17:01:58.468931   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.468946   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0814 17:01:58.468961   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0814 17:01:58.468971   50203 command_runner.go:130] >       ],
	I0814 17:01:58.468980   50203 command_runner.go:130] >       "size": "87190579",
	I0814 17:01:58.468989   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.469000   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469009   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469017   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469026   50203 command_runner.go:130] >     },
	I0814 17:01:58.469032   50203 command_runner.go:130] >     {
	I0814 17:01:58.469046   50203 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0814 17:01:58.469055   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469065   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0814 17:01:58.469073   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469081   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469095   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0814 17:01:58.469110   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0814 17:01:58.469119   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469127   50203 command_runner.go:130] >       "size": "1363676",
	I0814 17:01:58.469137   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.469146   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469155   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469163   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469170   50203 command_runner.go:130] >     },
	I0814 17:01:58.469178   50203 command_runner.go:130] >     {
	I0814 17:01:58.469189   50203 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0814 17:01:58.469197   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469208   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0814 17:01:58.469216   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469223   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469239   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0814 17:01:58.469258   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0814 17:01:58.469266   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469273   50203 command_runner.go:130] >       "size": "31470524",
	I0814 17:01:58.469283   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.469291   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469300   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469308   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469335   50203 command_runner.go:130] >     },
	I0814 17:01:58.469344   50203 command_runner.go:130] >     {
	I0814 17:01:58.469354   50203 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0814 17:01:58.469361   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469372   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0814 17:01:58.469380   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469388   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469403   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0814 17:01:58.469419   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0814 17:01:58.469428   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469436   50203 command_runner.go:130] >       "size": "61245718",
	I0814 17:01:58.469445   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.469453   50203 command_runner.go:130] >       "username": "nonroot",
	I0814 17:01:58.469463   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469471   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469479   50203 command_runner.go:130] >     },
	I0814 17:01:58.469486   50203 command_runner.go:130] >     {
	I0814 17:01:58.469498   50203 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0814 17:01:58.469508   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469518   50203 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0814 17:01:58.469525   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469533   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469552   50203 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0814 17:01:58.469566   50203 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0814 17:01:58.469573   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469582   50203 command_runner.go:130] >       "size": "149009664",
	I0814 17:01:58.469591   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.469600   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.469606   50203 command_runner.go:130] >       },
	I0814 17:01:58.469614   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469623   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469632   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469641   50203 command_runner.go:130] >     },
	I0814 17:01:58.469648   50203 command_runner.go:130] >     {
	I0814 17:01:58.469660   50203 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0814 17:01:58.469669   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469679   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0814 17:01:58.469691   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469702   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469716   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0814 17:01:58.469732   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0814 17:01:58.469740   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469748   50203 command_runner.go:130] >       "size": "95233506",
	I0814 17:01:58.469755   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.469764   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.469773   50203 command_runner.go:130] >       },
	I0814 17:01:58.469780   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469787   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469796   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469803   50203 command_runner.go:130] >     },
	I0814 17:01:58.469809   50203 command_runner.go:130] >     {
	I0814 17:01:58.469820   50203 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0814 17:01:58.469837   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469848   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0814 17:01:58.469857   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469864   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469887   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0814 17:01:58.469903   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0814 17:01:58.469911   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469918   50203 command_runner.go:130] >       "size": "89437512",
	I0814 17:01:58.469927   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.469936   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.469944   50203 command_runner.go:130] >       },
	I0814 17:01:58.469952   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469960   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469968   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469976   50203 command_runner.go:130] >     },
	I0814 17:01:58.469982   50203 command_runner.go:130] >     {
	I0814 17:01:58.469995   50203 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0814 17:01:58.470005   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.470014   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0814 17:01:58.470023   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470030   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.470043   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0814 17:01:58.470057   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0814 17:01:58.470066   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470073   50203 command_runner.go:130] >       "size": "92728217",
	I0814 17:01:58.470082   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.470090   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.470099   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.470108   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.470115   50203 command_runner.go:130] >     },
	I0814 17:01:58.470121   50203 command_runner.go:130] >     {
	I0814 17:01:58.470133   50203 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0814 17:01:58.470139   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.470149   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0814 17:01:58.470158   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470166   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.470180   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0814 17:01:58.470196   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0814 17:01:58.470206   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470215   50203 command_runner.go:130] >       "size": "68420936",
	I0814 17:01:58.470224   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.470233   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.470239   50203 command_runner.go:130] >       },
	I0814 17:01:58.470247   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.470257   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.470263   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.470271   50203 command_runner.go:130] >     },
	I0814 17:01:58.470278   50203 command_runner.go:130] >     {
	I0814 17:01:58.470292   50203 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0814 17:01:58.470301   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.470310   50203 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0814 17:01:58.470323   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470332   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.470347   50203 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0814 17:01:58.470361   50203 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0814 17:01:58.470370   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470377   50203 command_runner.go:130] >       "size": "742080",
	I0814 17:01:58.470386   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.470394   50203 command_runner.go:130] >         "value": "65535"
	I0814 17:01:58.470402   50203 command_runner.go:130] >       },
	I0814 17:01:58.470410   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.470419   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.470427   50203 command_runner.go:130] >       "pinned": true
	I0814 17:01:58.470435   50203 command_runner.go:130] >     }
	I0814 17:01:58.470454   50203 command_runner.go:130] >   ]
	I0814 17:01:58.470462   50203 command_runner.go:130] > }
	I0814 17:01:58.470595   50203 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:01:58.470608   50203 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:01:58.470617   50203 kubeadm.go:934] updating node { 192.168.39.36 8443 v1.31.0 crio true true} ...
	I0814 17:01:58.470751   50203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-986999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-986999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:01:58.470835   50203 ssh_runner.go:195] Run: crio config
	I0814 17:01:58.502180   50203 command_runner.go:130] ! time="2024-08-14 17:01:58.476078358Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0814 17:01:58.507441   50203 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0814 17:01:58.514220   50203 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0814 17:01:58.514241   50203 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0814 17:01:58.514248   50203 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0814 17:01:58.514251   50203 command_runner.go:130] > #
	I0814 17:01:58.514258   50203 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0814 17:01:58.514264   50203 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0814 17:01:58.514270   50203 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0814 17:01:58.514277   50203 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0814 17:01:58.514282   50203 command_runner.go:130] > # reload'.
	I0814 17:01:58.514291   50203 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0814 17:01:58.514301   50203 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0814 17:01:58.514314   50203 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0814 17:01:58.514323   50203 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0814 17:01:58.514328   50203 command_runner.go:130] > [crio]
	I0814 17:01:58.514337   50203 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0814 17:01:58.514344   50203 command_runner.go:130] > # containers images, in this directory.
	I0814 17:01:58.514354   50203 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0814 17:01:58.514365   50203 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0814 17:01:58.514377   50203 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0814 17:01:58.514388   50203 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0814 17:01:58.514393   50203 command_runner.go:130] > # imagestore = ""
	I0814 17:01:58.514405   50203 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0814 17:01:58.514412   50203 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0814 17:01:58.514420   50203 command_runner.go:130] > storage_driver = "overlay"
	I0814 17:01:58.514429   50203 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0814 17:01:58.514435   50203 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0814 17:01:58.514442   50203 command_runner.go:130] > storage_option = [
	I0814 17:01:58.514447   50203 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0814 17:01:58.514452   50203 command_runner.go:130] > ]
	I0814 17:01:58.514458   50203 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0814 17:01:58.514465   50203 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0814 17:01:58.514469   50203 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0814 17:01:58.514474   50203 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0814 17:01:58.514480   50203 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0814 17:01:58.514485   50203 command_runner.go:130] > # always happen on a node reboot
	I0814 17:01:58.514490   50203 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0814 17:01:58.514498   50203 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0814 17:01:58.514506   50203 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0814 17:01:58.514510   50203 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0814 17:01:58.514516   50203 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0814 17:01:58.514523   50203 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0814 17:01:58.514532   50203 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0814 17:01:58.514536   50203 command_runner.go:130] > # internal_wipe = true
	I0814 17:01:58.514545   50203 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0814 17:01:58.514555   50203 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0814 17:01:58.514559   50203 command_runner.go:130] > # internal_repair = false
	I0814 17:01:58.514564   50203 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0814 17:01:58.514570   50203 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0814 17:01:58.514577   50203 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0814 17:01:58.514584   50203 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0814 17:01:58.514590   50203 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0814 17:01:58.514597   50203 command_runner.go:130] > [crio.api]
	I0814 17:01:58.514603   50203 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0814 17:01:58.514609   50203 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0814 17:01:58.514615   50203 command_runner.go:130] > # IP address on which the stream server will listen.
	I0814 17:01:58.514621   50203 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0814 17:01:58.514627   50203 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0814 17:01:58.514635   50203 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0814 17:01:58.514641   50203 command_runner.go:130] > # stream_port = "0"
	I0814 17:01:58.514648   50203 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0814 17:01:58.514653   50203 command_runner.go:130] > # stream_enable_tls = false
	I0814 17:01:58.514660   50203 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0814 17:01:58.514668   50203 command_runner.go:130] > # stream_idle_timeout = ""
	I0814 17:01:58.514674   50203 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0814 17:01:58.514684   50203 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0814 17:01:58.514687   50203 command_runner.go:130] > # minutes.
	I0814 17:01:58.514691   50203 command_runner.go:130] > # stream_tls_cert = ""
	I0814 17:01:58.514697   50203 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0814 17:01:58.514705   50203 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0814 17:01:58.514710   50203 command_runner.go:130] > # stream_tls_key = ""
	I0814 17:01:58.514717   50203 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0814 17:01:58.514723   50203 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0814 17:01:58.514749   50203 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0814 17:01:58.514756   50203 command_runner.go:130] > # stream_tls_ca = ""
	I0814 17:01:58.514763   50203 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0814 17:01:58.514767   50203 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0814 17:01:58.514775   50203 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0814 17:01:58.514781   50203 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0814 17:01:58.514786   50203 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0814 17:01:58.514792   50203 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0814 17:01:58.514796   50203 command_runner.go:130] > [crio.runtime]
	I0814 17:01:58.514802   50203 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0814 17:01:58.514809   50203 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0814 17:01:58.514814   50203 command_runner.go:130] > # "nofile=1024:2048"
	I0814 17:01:58.514828   50203 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0814 17:01:58.514835   50203 command_runner.go:130] > # default_ulimits = [
	I0814 17:01:58.514838   50203 command_runner.go:130] > # ]
	I0814 17:01:58.514860   50203 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0814 17:01:58.514867   50203 command_runner.go:130] > # no_pivot = false
	I0814 17:01:58.514873   50203 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0814 17:01:58.514880   50203 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0814 17:01:58.514885   50203 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0814 17:01:58.514891   50203 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0814 17:01:58.514899   50203 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0814 17:01:58.514906   50203 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0814 17:01:58.514913   50203 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0814 17:01:58.514917   50203 command_runner.go:130] > # Cgroup setting for conmon
	I0814 17:01:58.514923   50203 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0814 17:01:58.514929   50203 command_runner.go:130] > conmon_cgroup = "pod"
	I0814 17:01:58.514935   50203 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0814 17:01:58.514942   50203 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0814 17:01:58.514949   50203 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0814 17:01:58.514955   50203 command_runner.go:130] > conmon_env = [
	I0814 17:01:58.514960   50203 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0814 17:01:58.514966   50203 command_runner.go:130] > ]
	I0814 17:01:58.514971   50203 command_runner.go:130] > # Additional environment variables to set for all the
	I0814 17:01:58.514978   50203 command_runner.go:130] > # containers. These are overridden if set in the
	I0814 17:01:58.514983   50203 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0814 17:01:58.514989   50203 command_runner.go:130] > # default_env = [
	I0814 17:01:58.514992   50203 command_runner.go:130] > # ]
	I0814 17:01:58.514997   50203 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0814 17:01:58.515005   50203 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0814 17:01:58.515010   50203 command_runner.go:130] > # selinux = false
	I0814 17:01:58.515016   50203 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0814 17:01:58.515022   50203 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0814 17:01:58.515028   50203 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0814 17:01:58.515032   50203 command_runner.go:130] > # seccomp_profile = ""
	I0814 17:01:58.515039   50203 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0814 17:01:58.515045   50203 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0814 17:01:58.515053   50203 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0814 17:01:58.515057   50203 command_runner.go:130] > # which might increase security.
	I0814 17:01:58.515063   50203 command_runner.go:130] > # This option is currently deprecated,
	I0814 17:01:58.515069   50203 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0814 17:01:58.515076   50203 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0814 17:01:58.515082   50203 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0814 17:01:58.515090   50203 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0814 17:01:58.515096   50203 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0814 17:01:58.515104   50203 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0814 17:01:58.515109   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.515116   50203 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0814 17:01:58.515121   50203 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0814 17:01:58.515128   50203 command_runner.go:130] > # the cgroup blockio controller.
	I0814 17:01:58.515132   50203 command_runner.go:130] > # blockio_config_file = ""
	I0814 17:01:58.515139   50203 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0814 17:01:58.515144   50203 command_runner.go:130] > # blockio parameters.
	I0814 17:01:58.515150   50203 command_runner.go:130] > # blockio_reload = false
	I0814 17:01:58.515157   50203 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0814 17:01:58.515163   50203 command_runner.go:130] > # irqbalance daemon.
	I0814 17:01:58.515168   50203 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0814 17:01:58.515185   50203 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0814 17:01:58.515192   50203 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0814 17:01:58.515199   50203 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0814 17:01:58.515205   50203 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0814 17:01:58.515213   50203 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0814 17:01:58.515218   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.515224   50203 command_runner.go:130] > # rdt_config_file = ""
	I0814 17:01:58.515229   50203 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0814 17:01:58.515235   50203 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0814 17:01:58.515256   50203 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0814 17:01:58.515262   50203 command_runner.go:130] > # separate_pull_cgroup = ""
	I0814 17:01:58.515269   50203 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0814 17:01:58.515277   50203 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0814 17:01:58.515281   50203 command_runner.go:130] > # will be added.
	I0814 17:01:58.515285   50203 command_runner.go:130] > # default_capabilities = [
	I0814 17:01:58.515290   50203 command_runner.go:130] > # 	"CHOWN",
	I0814 17:01:58.515294   50203 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0814 17:01:58.515298   50203 command_runner.go:130] > # 	"FSETID",
	I0814 17:01:58.515302   50203 command_runner.go:130] > # 	"FOWNER",
	I0814 17:01:58.515305   50203 command_runner.go:130] > # 	"SETGID",
	I0814 17:01:58.515309   50203 command_runner.go:130] > # 	"SETUID",
	I0814 17:01:58.515312   50203 command_runner.go:130] > # 	"SETPCAP",
	I0814 17:01:58.515316   50203 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0814 17:01:58.515319   50203 command_runner.go:130] > # 	"KILL",
	I0814 17:01:58.515337   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515352   50203 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0814 17:01:58.515364   50203 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0814 17:01:58.515369   50203 command_runner.go:130] > # add_inheritable_capabilities = false
	I0814 17:01:58.515376   50203 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0814 17:01:58.515384   50203 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0814 17:01:58.515388   50203 command_runner.go:130] > default_sysctls = [
	I0814 17:01:58.515397   50203 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0814 17:01:58.515402   50203 command_runner.go:130] > ]
	I0814 17:01:58.515407   50203 command_runner.go:130] > # List of devices on the host that a
	I0814 17:01:58.515419   50203 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0814 17:01:58.515425   50203 command_runner.go:130] > # allowed_devices = [
	I0814 17:01:58.515428   50203 command_runner.go:130] > # 	"/dev/fuse",
	I0814 17:01:58.515432   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515436   50203 command_runner.go:130] > # List of additional devices, specified as
	I0814 17:01:58.515443   50203 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0814 17:01:58.515451   50203 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0814 17:01:58.515457   50203 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0814 17:01:58.515463   50203 command_runner.go:130] > # additional_devices = [
	I0814 17:01:58.515466   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515471   50203 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0814 17:01:58.515477   50203 command_runner.go:130] > # cdi_spec_dirs = [
	I0814 17:01:58.515481   50203 command_runner.go:130] > # 	"/etc/cdi",
	I0814 17:01:58.515487   50203 command_runner.go:130] > # 	"/var/run/cdi",
	I0814 17:01:58.515491   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515497   50203 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0814 17:01:58.515505   50203 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0814 17:01:58.515509   50203 command_runner.go:130] > # Defaults to false.
	I0814 17:01:58.515514   50203 command_runner.go:130] > # device_ownership_from_security_context = false
	I0814 17:01:58.515522   50203 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0814 17:01:58.515527   50203 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0814 17:01:58.515531   50203 command_runner.go:130] > # hooks_dir = [
	I0814 17:01:58.515535   50203 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0814 17:01:58.515541   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515547   50203 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0814 17:01:58.515555   50203 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0814 17:01:58.515560   50203 command_runner.go:130] > # its default mounts from the following two files:
	I0814 17:01:58.515565   50203 command_runner.go:130] > #
	I0814 17:01:58.515570   50203 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0814 17:01:58.515579   50203 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0814 17:01:58.515584   50203 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0814 17:01:58.515589   50203 command_runner.go:130] > #
	I0814 17:01:58.515595   50203 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0814 17:01:58.515602   50203 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0814 17:01:58.515610   50203 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0814 17:01:58.515615   50203 command_runner.go:130] > #      only add mounts it finds in this file.
	I0814 17:01:58.515619   50203 command_runner.go:130] > #
	I0814 17:01:58.515623   50203 command_runner.go:130] > # default_mounts_file = ""
	I0814 17:01:58.515630   50203 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0814 17:01:58.515637   50203 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0814 17:01:58.515641   50203 command_runner.go:130] > pids_limit = 1024
	I0814 17:01:58.515647   50203 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0814 17:01:58.515655   50203 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0814 17:01:58.515661   50203 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0814 17:01:58.515671   50203 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0814 17:01:58.515677   50203 command_runner.go:130] > # log_size_max = -1
	I0814 17:01:58.515683   50203 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0814 17:01:58.515690   50203 command_runner.go:130] > # log_to_journald = false
	I0814 17:01:58.515696   50203 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0814 17:01:58.515703   50203 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0814 17:01:58.515707   50203 command_runner.go:130] > # Path to directory for container attach sockets.
	I0814 17:01:58.515714   50203 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0814 17:01:58.515720   50203 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0814 17:01:58.515726   50203 command_runner.go:130] > # bind_mount_prefix = ""
	I0814 17:01:58.515731   50203 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0814 17:01:58.515735   50203 command_runner.go:130] > # read_only = false
	I0814 17:01:58.515742   50203 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0814 17:01:58.515751   50203 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0814 17:01:58.515756   50203 command_runner.go:130] > # live configuration reload.
	I0814 17:01:58.515760   50203 command_runner.go:130] > # log_level = "info"
	I0814 17:01:58.515765   50203 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0814 17:01:58.515772   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.515776   50203 command_runner.go:130] > # log_filter = ""
	I0814 17:01:58.515784   50203 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0814 17:01:58.515792   50203 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0814 17:01:58.515798   50203 command_runner.go:130] > # separated by comma.
	I0814 17:01:58.515805   50203 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 17:01:58.515812   50203 command_runner.go:130] > # uid_mappings = ""
	I0814 17:01:58.515818   50203 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0814 17:01:58.515825   50203 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0814 17:01:58.515829   50203 command_runner.go:130] > # separated by comma.
	I0814 17:01:58.515838   50203 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 17:01:58.515841   50203 command_runner.go:130] > # gid_mappings = ""
	I0814 17:01:58.515848   50203 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0814 17:01:58.515856   50203 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0814 17:01:58.515861   50203 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0814 17:01:58.515870   50203 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 17:01:58.515875   50203 command_runner.go:130] > # minimum_mappable_uid = -1
	I0814 17:01:58.515880   50203 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0814 17:01:58.515888   50203 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0814 17:01:58.515893   50203 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0814 17:01:58.515902   50203 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 17:01:58.515909   50203 command_runner.go:130] > # minimum_mappable_gid = -1
	I0814 17:01:58.515915   50203 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0814 17:01:58.515923   50203 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0814 17:01:58.515928   50203 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0814 17:01:58.515932   50203 command_runner.go:130] > # ctr_stop_timeout = 30
	I0814 17:01:58.515938   50203 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0814 17:01:58.515946   50203 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0814 17:01:58.515951   50203 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0814 17:01:58.515958   50203 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0814 17:01:58.515963   50203 command_runner.go:130] > drop_infra_ctr = false
	I0814 17:01:58.515970   50203 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0814 17:01:58.515976   50203 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0814 17:01:58.515985   50203 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0814 17:01:58.515990   50203 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0814 17:01:58.515997   50203 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0814 17:01:58.516005   50203 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0814 17:01:58.516011   50203 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0814 17:01:58.516017   50203 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0814 17:01:58.516020   50203 command_runner.go:130] > # shared_cpuset = ""
	I0814 17:01:58.516026   50203 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0814 17:01:58.516031   50203 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0814 17:01:58.516035   50203 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0814 17:01:58.516044   50203 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0814 17:01:58.516049   50203 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0814 17:01:58.516055   50203 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0814 17:01:58.516061   50203 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0814 17:01:58.516067   50203 command_runner.go:130] > # enable_criu_support = false
	I0814 17:01:58.516072   50203 command_runner.go:130] > # Enable/disable the generation of the container,
	I0814 17:01:58.516080   50203 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0814 17:01:58.516086   50203 command_runner.go:130] > # enable_pod_events = false
	I0814 17:01:58.516092   50203 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0814 17:01:58.516105   50203 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0814 17:01:58.516111   50203 command_runner.go:130] > # default_runtime = "runc"
	I0814 17:01:58.516116   50203 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0814 17:01:58.516123   50203 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0814 17:01:58.516133   50203 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0814 17:01:58.516140   50203 command_runner.go:130] > # creation as a file is not desired either.
	I0814 17:01:58.516148   50203 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0814 17:01:58.516155   50203 command_runner.go:130] > # the hostname is being managed dynamically.
	I0814 17:01:58.516159   50203 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0814 17:01:58.516164   50203 command_runner.go:130] > # ]
	I0814 17:01:58.516170   50203 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0814 17:01:58.516178   50203 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0814 17:01:58.516184   50203 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0814 17:01:58.516191   50203 command_runner.go:130] > # Each entry in the table should follow the format:
	I0814 17:01:58.516194   50203 command_runner.go:130] > #
	I0814 17:01:58.516199   50203 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0814 17:01:58.516205   50203 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0814 17:01:58.516227   50203 command_runner.go:130] > # runtime_type = "oci"
	I0814 17:01:58.516234   50203 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0814 17:01:58.516238   50203 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0814 17:01:58.516245   50203 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0814 17:01:58.516250   50203 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0814 17:01:58.516254   50203 command_runner.go:130] > # monitor_env = []
	I0814 17:01:58.516259   50203 command_runner.go:130] > # privileged_without_host_devices = false
	I0814 17:01:58.516265   50203 command_runner.go:130] > # allowed_annotations = []
	I0814 17:01:58.516270   50203 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0814 17:01:58.516275   50203 command_runner.go:130] > # Where:
	I0814 17:01:58.516281   50203 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0814 17:01:58.516291   50203 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0814 17:01:58.516297   50203 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0814 17:01:58.516305   50203 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0814 17:01:58.516309   50203 command_runner.go:130] > #   in $PATH.
	I0814 17:01:58.516317   50203 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0814 17:01:58.516322   50203 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0814 17:01:58.516330   50203 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0814 17:01:58.516334   50203 command_runner.go:130] > #   state.
	I0814 17:01:58.516342   50203 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0814 17:01:58.516348   50203 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0814 17:01:58.516356   50203 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0814 17:01:58.516362   50203 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0814 17:01:58.516370   50203 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0814 17:01:58.516376   50203 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0814 17:01:58.516383   50203 command_runner.go:130] > #   The currently recognized values are:
	I0814 17:01:58.516389   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0814 17:01:58.516398   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0814 17:01:58.516403   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0814 17:01:58.516410   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0814 17:01:58.516421   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0814 17:01:58.516429   50203 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0814 17:01:58.516436   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0814 17:01:58.516444   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0814 17:01:58.516450   50203 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0814 17:01:58.516458   50203 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0814 17:01:58.516462   50203 command_runner.go:130] > #   deprecated option "conmon".
	I0814 17:01:58.516470   50203 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0814 17:01:58.516475   50203 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0814 17:01:58.516484   50203 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0814 17:01:58.516491   50203 command_runner.go:130] > #   should be moved to the container's cgroup
	I0814 17:01:58.516500   50203 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0814 17:01:58.516505   50203 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0814 17:01:58.516510   50203 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0814 17:01:58.516517   50203 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0814 17:01:58.516521   50203 command_runner.go:130] > #
	I0814 17:01:58.516529   50203 command_runner.go:130] > # Using the seccomp notifier feature:
	I0814 17:01:58.516535   50203 command_runner.go:130] > #
	I0814 17:01:58.516541   50203 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0814 17:01:58.516549   50203 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0814 17:01:58.516552   50203 command_runner.go:130] > #
	I0814 17:01:58.516560   50203 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0814 17:01:58.516566   50203 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0814 17:01:58.516572   50203 command_runner.go:130] > #
	I0814 17:01:58.516578   50203 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0814 17:01:58.516581   50203 command_runner.go:130] > # feature.
	I0814 17:01:58.516584   50203 command_runner.go:130] > #
	I0814 17:01:58.516589   50203 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0814 17:01:58.516597   50203 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0814 17:01:58.516603   50203 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0814 17:01:58.516609   50203 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0814 17:01:58.516614   50203 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0814 17:01:58.516617   50203 command_runner.go:130] > #
	I0814 17:01:58.516622   50203 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0814 17:01:58.516628   50203 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0814 17:01:58.516631   50203 command_runner.go:130] > #
	I0814 17:01:58.516636   50203 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0814 17:01:58.516641   50203 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0814 17:01:58.516645   50203 command_runner.go:130] > #
	I0814 17:01:58.516650   50203 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0814 17:01:58.516663   50203 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0814 17:01:58.516666   50203 command_runner.go:130] > # limitation.
	I0814 17:01:58.516671   50203 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0814 17:01:58.516678   50203 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0814 17:01:58.516681   50203 command_runner.go:130] > runtime_type = "oci"
	I0814 17:01:58.516685   50203 command_runner.go:130] > runtime_root = "/run/runc"
	I0814 17:01:58.516689   50203 command_runner.go:130] > runtime_config_path = ""
	I0814 17:01:58.516694   50203 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0814 17:01:58.516700   50203 command_runner.go:130] > monitor_cgroup = "pod"
	I0814 17:01:58.516704   50203 command_runner.go:130] > monitor_exec_cgroup = ""
	I0814 17:01:58.516710   50203 command_runner.go:130] > monitor_env = [
	I0814 17:01:58.516715   50203 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0814 17:01:58.516721   50203 command_runner.go:130] > ]
	I0814 17:01:58.516726   50203 command_runner.go:130] > privileged_without_host_devices = false
	I0814 17:01:58.516732   50203 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0814 17:01:58.516737   50203 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0814 17:01:58.516744   50203 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0814 17:01:58.516753   50203 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0814 17:01:58.516760   50203 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0814 17:01:58.516767   50203 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0814 17:01:58.516776   50203 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0814 17:01:58.516785   50203 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0814 17:01:58.516791   50203 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0814 17:01:58.516798   50203 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0814 17:01:58.516801   50203 command_runner.go:130] > # Example:
	I0814 17:01:58.516805   50203 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0814 17:01:58.516809   50203 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0814 17:01:58.516814   50203 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0814 17:01:58.516818   50203 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0814 17:01:58.516822   50203 command_runner.go:130] > # cpuset = 0
	I0814 17:01:58.516825   50203 command_runner.go:130] > # cpushares = "0-1"
	I0814 17:01:58.516828   50203 command_runner.go:130] > # Where:
	I0814 17:01:58.516833   50203 command_runner.go:130] > # The workload name is workload-type.
	I0814 17:01:58.516839   50203 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0814 17:01:58.516844   50203 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0814 17:01:58.516849   50203 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0814 17:01:58.516856   50203 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0814 17:01:58.516861   50203 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0814 17:01:58.516865   50203 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0814 17:01:58.516871   50203 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0814 17:01:58.516875   50203 command_runner.go:130] > # Default value is set to true
	I0814 17:01:58.516879   50203 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0814 17:01:58.516884   50203 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0814 17:01:58.516888   50203 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0814 17:01:58.516892   50203 command_runner.go:130] > # Default value is set to 'false'
	I0814 17:01:58.516896   50203 command_runner.go:130] > # disable_hostport_mapping = false
	I0814 17:01:58.516902   50203 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0814 17:01:58.516905   50203 command_runner.go:130] > #
	I0814 17:01:58.516919   50203 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0814 17:01:58.516925   50203 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0814 17:01:58.516931   50203 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0814 17:01:58.516937   50203 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0814 17:01:58.516942   50203 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0814 17:01:58.516945   50203 command_runner.go:130] > [crio.image]
	I0814 17:01:58.516950   50203 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0814 17:01:58.516954   50203 command_runner.go:130] > # default_transport = "docker://"
	I0814 17:01:58.516960   50203 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0814 17:01:58.516966   50203 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0814 17:01:58.516969   50203 command_runner.go:130] > # global_auth_file = ""
	I0814 17:01:58.516974   50203 command_runner.go:130] > # The image used to instantiate infra containers.
	I0814 17:01:58.516981   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.516985   50203 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0814 17:01:58.516991   50203 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0814 17:01:58.516997   50203 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0814 17:01:58.517002   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.517009   50203 command_runner.go:130] > # pause_image_auth_file = ""
	I0814 17:01:58.517015   50203 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0814 17:01:58.517023   50203 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0814 17:01:58.517028   50203 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0814 17:01:58.517036   50203 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0814 17:01:58.517040   50203 command_runner.go:130] > # pause_command = "/pause"
	I0814 17:01:58.517046   50203 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0814 17:01:58.517054   50203 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0814 17:01:58.517060   50203 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0814 17:01:58.517068   50203 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0814 17:01:58.517076   50203 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0814 17:01:58.517082   50203 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0814 17:01:58.517088   50203 command_runner.go:130] > # pinned_images = [
	I0814 17:01:58.517091   50203 command_runner.go:130] > # ]
	I0814 17:01:58.517097   50203 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0814 17:01:58.517105   50203 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0814 17:01:58.517111   50203 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0814 17:01:58.517119   50203 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0814 17:01:58.517124   50203 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0814 17:01:58.517131   50203 command_runner.go:130] > # signature_policy = ""
	I0814 17:01:58.517137   50203 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0814 17:01:58.517146   50203 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0814 17:01:58.517152   50203 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0814 17:01:58.517160   50203 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0814 17:01:58.517165   50203 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0814 17:01:58.517170   50203 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0814 17:01:58.517178   50203 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0814 17:01:58.517184   50203 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0814 17:01:58.517190   50203 command_runner.go:130] > # changing them here.
	I0814 17:01:58.517194   50203 command_runner.go:130] > # insecure_registries = [
	I0814 17:01:58.517197   50203 command_runner.go:130] > # ]
	I0814 17:01:58.517203   50203 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0814 17:01:58.517209   50203 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0814 17:01:58.517213   50203 command_runner.go:130] > # image_volumes = "mkdir"
	I0814 17:01:58.517220   50203 command_runner.go:130] > # Temporary directory to use for storing big files
	I0814 17:01:58.517224   50203 command_runner.go:130] > # big_files_temporary_dir = ""
	I0814 17:01:58.517232   50203 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0814 17:01:58.517236   50203 command_runner.go:130] > # CNI plugins.
	I0814 17:01:58.517242   50203 command_runner.go:130] > [crio.network]
	I0814 17:01:58.517248   50203 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0814 17:01:58.517255   50203 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0814 17:01:58.517259   50203 command_runner.go:130] > # cni_default_network = ""
	I0814 17:01:58.517265   50203 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0814 17:01:58.517270   50203 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0814 17:01:58.517275   50203 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0814 17:01:58.517280   50203 command_runner.go:130] > # plugin_dirs = [
	I0814 17:01:58.517284   50203 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0814 17:01:58.517288   50203 command_runner.go:130] > # ]
	I0814 17:01:58.517295   50203 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0814 17:01:58.517299   50203 command_runner.go:130] > [crio.metrics]
	I0814 17:01:58.517306   50203 command_runner.go:130] > # Globally enable or disable metrics support.
	I0814 17:01:58.517310   50203 command_runner.go:130] > enable_metrics = true
	I0814 17:01:58.517314   50203 command_runner.go:130] > # Specify enabled metrics collectors.
	I0814 17:01:58.517321   50203 command_runner.go:130] > # Per default all metrics are enabled.
	I0814 17:01:58.517326   50203 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0814 17:01:58.517335   50203 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0814 17:01:58.517340   50203 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0814 17:01:58.517347   50203 command_runner.go:130] > # metrics_collectors = [
	I0814 17:01:58.517351   50203 command_runner.go:130] > # 	"operations",
	I0814 17:01:58.517357   50203 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0814 17:01:58.517365   50203 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0814 17:01:58.517369   50203 command_runner.go:130] > # 	"operations_errors",
	I0814 17:01:58.517373   50203 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0814 17:01:58.517377   50203 command_runner.go:130] > # 	"image_pulls_by_name",
	I0814 17:01:58.517381   50203 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0814 17:01:58.517385   50203 command_runner.go:130] > # 	"image_pulls_failures",
	I0814 17:01:58.517389   50203 command_runner.go:130] > # 	"image_pulls_successes",
	I0814 17:01:58.517393   50203 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0814 17:01:58.517396   50203 command_runner.go:130] > # 	"image_layer_reuse",
	I0814 17:01:58.517401   50203 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0814 17:01:58.517407   50203 command_runner.go:130] > # 	"containers_oom_total",
	I0814 17:01:58.517411   50203 command_runner.go:130] > # 	"containers_oom",
	I0814 17:01:58.517419   50203 command_runner.go:130] > # 	"processes_defunct",
	I0814 17:01:58.517423   50203 command_runner.go:130] > # 	"operations_total",
	I0814 17:01:58.517427   50203 command_runner.go:130] > # 	"operations_latency_seconds",
	I0814 17:01:58.517432   50203 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0814 17:01:58.517437   50203 command_runner.go:130] > # 	"operations_errors_total",
	I0814 17:01:58.517442   50203 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0814 17:01:58.517449   50203 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0814 17:01:58.517452   50203 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0814 17:01:58.517457   50203 command_runner.go:130] > # 	"image_pulls_success_total",
	I0814 17:01:58.517461   50203 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0814 17:01:58.517468   50203 command_runner.go:130] > # 	"containers_oom_count_total",
	I0814 17:01:58.517473   50203 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0814 17:01:58.517477   50203 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0814 17:01:58.517480   50203 command_runner.go:130] > # ]
	I0814 17:01:58.517485   50203 command_runner.go:130] > # The port on which the metrics server will listen.
	I0814 17:01:58.517491   50203 command_runner.go:130] > # metrics_port = 9090
	I0814 17:01:58.517496   50203 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0814 17:01:58.517502   50203 command_runner.go:130] > # metrics_socket = ""
	I0814 17:01:58.517508   50203 command_runner.go:130] > # The certificate for the secure metrics server.
	I0814 17:01:58.517519   50203 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0814 17:01:58.517527   50203 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0814 17:01:58.517532   50203 command_runner.go:130] > # certificate on any modification event.
	I0814 17:01:58.517537   50203 command_runner.go:130] > # metrics_cert = ""
	I0814 17:01:58.517542   50203 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0814 17:01:58.517549   50203 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0814 17:01:58.517553   50203 command_runner.go:130] > # metrics_key = ""
	I0814 17:01:58.517558   50203 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0814 17:01:58.517564   50203 command_runner.go:130] > [crio.tracing]
	I0814 17:01:58.517569   50203 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0814 17:01:58.517573   50203 command_runner.go:130] > # enable_tracing = false
	I0814 17:01:58.517578   50203 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0814 17:01:58.517584   50203 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0814 17:01:58.517591   50203 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0814 17:01:58.517598   50203 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0814 17:01:58.517602   50203 command_runner.go:130] > # CRI-O NRI configuration.
	I0814 17:01:58.517605   50203 command_runner.go:130] > [crio.nri]
	I0814 17:01:58.517610   50203 command_runner.go:130] > # Globally enable or disable NRI.
	I0814 17:01:58.517616   50203 command_runner.go:130] > # enable_nri = false
	I0814 17:01:58.517620   50203 command_runner.go:130] > # NRI socket to listen on.
	I0814 17:01:58.517627   50203 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0814 17:01:58.517631   50203 command_runner.go:130] > # NRI plugin directory to use.
	I0814 17:01:58.517638   50203 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0814 17:01:58.517642   50203 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0814 17:01:58.517649   50203 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0814 17:01:58.517654   50203 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0814 17:01:58.517659   50203 command_runner.go:130] > # nri_disable_connections = false
	I0814 17:01:58.517666   50203 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0814 17:01:58.517670   50203 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0814 17:01:58.517678   50203 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0814 17:01:58.517682   50203 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0814 17:01:58.517690   50203 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0814 17:01:58.517694   50203 command_runner.go:130] > [crio.stats]
	I0814 17:01:58.517700   50203 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0814 17:01:58.517708   50203 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0814 17:01:58.517712   50203 command_runner.go:130] > # stats_collection_period = 0
	I0814 17:01:58.517841   50203 cni.go:84] Creating CNI manager for ""
	I0814 17:01:58.517854   50203 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0814 17:01:58.517864   50203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:01:58.517889   50203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-986999 NodeName:multinode-986999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:01:58.518005   50203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-986999"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
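	Note: the block above is the multi-document kubeadm/kubelet/kube-proxy configuration that this run later copies to /var/tmp/minikube/kubeadm.yaml.new (2157 bytes, see the scp step below). As a quick way to confirm that such a file parses into the expected documents, here is a minimal Go sketch using gopkg.in/yaml.v3; it is illustrative only, not minikube code, and the local file name is an assumption.

// kubeadmcheck.go - minimal sketch (not part of minikube): decode the
// multi-document kubeadm YAML and print each document's kind/apiVersion.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml.new") // assumed local copy of the generated config
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if err == io.EOF {
				break // no more YAML documents
			}
			log.Fatalf("config does not parse: %v", err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}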
	I0814 17:01:58.518066   50203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:01:58.527717   50203 command_runner.go:130] > kubeadm
	I0814 17:01:58.527736   50203 command_runner.go:130] > kubectl
	I0814 17:01:58.527739   50203 command_runner.go:130] > kubelet
	I0814 17:01:58.527846   50203 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:01:58.527899   50203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:01:58.536772   50203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0814 17:01:58.552474   50203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:01:58.570073   50203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0814 17:01:58.587255   50203 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0814 17:01:58.590952   50203 command_runner.go:130] > 192.168.39.36	control-plane.minikube.internal
	I0814 17:01:58.591020   50203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:01:58.740761   50203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:01:58.754839   50203 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999 for IP: 192.168.39.36
	I0814 17:01:58.754873   50203 certs.go:194] generating shared ca certs ...
	I0814 17:01:58.754897   50203 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:01:58.755063   50203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:01:58.755118   50203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:01:58.755132   50203 certs.go:256] generating profile certs ...
	I0814 17:01:58.755239   50203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/client.key
	I0814 17:01:58.755313   50203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.key.fc6ade07
	I0814 17:01:58.755397   50203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.key
	I0814 17:01:58.755412   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 17:01:58.755435   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 17:01:58.755457   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 17:01:58.755479   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 17:01:58.755498   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 17:01:58.755519   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 17:01:58.755544   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 17:01:58.755591   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 17:01:58.755721   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:01:58.755817   50203 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:01:58.755832   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:01:58.755875   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:01:58.755909   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:01:58.755940   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:01:58.756000   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:01:58.756049   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:58.756071   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 17:01:58.756091   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 17:01:58.756669   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:01:58.780635   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:01:58.803117   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:01:58.826051   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:01:58.847780   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:01:58.870900   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:01:58.893304   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:01:58.915703   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:01:58.939624   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:01:58.962946   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:01:58.986989   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:01:59.009969   50203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:01:59.025705   50203 ssh_runner.go:195] Run: openssl version
	I0814 17:01:59.030993   50203 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0814 17:01:59.031080   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:01:59.041216   50203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:01:59.045139   50203 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:01:59.045217   50203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:01:59.045280   50203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:01:59.050700   50203 command_runner.go:130] > 3ec20f2e
	I0814 17:01:59.050778   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:01:59.060129   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:01:59.070552   50203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:59.074745   50203 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:59.074776   50203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:59.074814   50203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:59.080099   50203 command_runner.go:130] > b5213941
	I0814 17:01:59.080165   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:01:59.088654   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:01:59.099213   50203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:01:59.103377   50203 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:01:59.103411   50203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:01:59.103449   50203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:01:59.108876   50203 command_runner.go:130] > 51391683
	I0814 17:01:59.108948   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
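	Note: the three ssh_runner sequences above compute each certificate's OpenSSL subject hash and then link /etc/ssl/certs/<hash>.0 to the installed PEM so the system trust store can resolve it. Below is a minimal local sketch of the same technique, assuming openssl is on PATH and the target directory is writable; it mirrors the commands shown in the log but is not the code minikube runs.

// hashlink.go - minimal sketch of the "openssl x509 -hash" + symlink step;
// pemPath and certDir are assumptions for illustration.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // assumed cert location
	certDir := "/etc/ssl/certs"

	// Ask openssl for the subject hash, exactly as in the log lines above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatalf("openssl failed: %v", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join(certDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("link already present:", link)
		return
	}
	// Equivalent of: ln -fs <pem> <hash>.0
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("created", link, "->", pemPath)
}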
	I0814 17:01:59.117495   50203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:01:59.121624   50203 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:01:59.121643   50203 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0814 17:01:59.121649   50203 command_runner.go:130] > Device: 253,1	Inode: 7338518     Links: 1
	I0814 17:01:59.121657   50203 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0814 17:01:59.121666   50203 command_runner.go:130] > Access: 2024-08-14 16:54:48.371171037 +0000
	I0814 17:01:59.121674   50203 command_runner.go:130] > Modify: 2024-08-14 16:54:48.371171037 +0000
	I0814 17:01:59.121682   50203 command_runner.go:130] > Change: 2024-08-14 16:54:48.371171037 +0000
	I0814 17:01:59.121690   50203 command_runner.go:130] >  Birth: 2024-08-14 16:54:48.371171037 +0000
	I0814 17:01:59.121738   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:01:59.126721   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.126857   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:01:59.131834   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.131992   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:01:59.136994   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.137054   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:01:59.142234   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.142298   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:01:59.147295   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.147352   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:01:59.152289   50203 command_runner.go:130] > Certificate will not expire
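	Note: the "-checkend 86400" calls above ask OpenSSL whether each certificate expires within the next 24 hours. The same check can be done directly with Go's crypto/x509, as in the minimal sketch below; the certificate path is an assumption, and this is an illustration rather than how minikube performs the check.

// checkend.go - minimal sketch of "openssl x509 -checkend 86400" in Go.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // assumed local cert file
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour) // 86400 seconds, as in -checkend 86400
	if cert.NotAfter.Before(deadline) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}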
	I0814 17:01:59.152418   50203 kubeadm.go:392] StartCluster: {Name:multinode-986999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-986999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:01:59.152545   50203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:01:59.152601   50203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:01:59.188529   50203 command_runner.go:130] > 477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6
	I0814 17:01:59.188562   50203 command_runner.go:130] > 53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1
	I0814 17:01:59.188573   50203 command_runner.go:130] > 7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15
	I0814 17:01:59.188583   50203 command_runner.go:130] > 065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14
	I0814 17:01:59.188592   50203 command_runner.go:130] > 8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c
	I0814 17:01:59.188602   50203 command_runner.go:130] > 8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6
	I0814 17:01:59.188612   50203 command_runner.go:130] > 6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391
	I0814 17:01:59.188627   50203 command_runner.go:130] > 89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b
	I0814 17:01:59.188656   50203 cri.go:89] found id: "477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6"
	I0814 17:01:59.188668   50203 cri.go:89] found id: "53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1"
	I0814 17:01:59.188679   50203 cri.go:89] found id: "7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15"
	I0814 17:01:59.188685   50203 cri.go:89] found id: "065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14"
	I0814 17:01:59.188697   50203 cri.go:89] found id: "8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c"
	I0814 17:01:59.188707   50203 cri.go:89] found id: "8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6"
	I0814 17:01:59.188712   50203 cri.go:89] found id: "6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391"
	I0814 17:01:59.188722   50203 cri.go:89] found id: "89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b"
	I0814 17:01:59.188727   50203 cri.go:89] found id: ""
	I0814 17:01:59.188786   50203 ssh_runner.go:195] Run: sudo runc list -f json
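	Note: the crictl call above returns one container ID per line, and cri.go records each as a "found id" entry before moving on to "runc list". The following minimal Go sketch shows that splitting step; it assumes crictl is installed and configured on the host and is illustrative only, not the minikube implementation.

// listids.go - minimal sketch of collecting kube-system container IDs the way
// the log above does; assumes crictl is on PATH and already configured.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("crictl failed: %v", err)
	}
	// One container ID per line; drop blanks.
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Printf("%d kube-system containers\n", len(ids))
}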
	
	
	==> CRI-O <==
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.895126831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655022895099793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=743080f7-0d22-4547-95d4-ff5d515c3df7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.895576454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40f412be-3ba9-4d61-9481-ca29ee74430c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.895651561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40f412be-3ba9-4d61-9481-ca29ee74430c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.896076480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9efaf008e647ffcb5f0c423a583a70e502d0ea59692e641ee7de27fa83bb1937,PodSandboxId:7088c953f9919fc941dea99184e30e15de825db4abc05fe9d5144e49b592c2fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723654958637851468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12,PodSandboxId:ce87834f1ac6dd64242c171bdb344ac70587e5f69a887a77dccd74c1f20c0ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723654925142388498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9,PodSandboxId:88e6d7a45fe69132bbb6e9f72e6ce97524fce7ae3a02563652e50328288e573e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723654925090206101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3,PodSandboxId:f76570df814da4afd1a258d16091a2faffe2f4b87159cbb7a2c6d79fbd15d97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723654924910925811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e543773f8b925101af65c0c17102fe3ac7a686565faf3adc98871a29fec93f7,PodSandboxId:e005cf5ff5a20e92b32be25934087564b0c3836e35fda89e3e62ff1ada53f170,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723654924967863633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473,PodSandboxId:1957784acd36be388d7d7b812461cf0ed476328aceea4a7842966e39fe0116e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723654921092802979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1,PodSandboxId:653a47cf7bd0ad47bcef95ff44bb427854e43a9237411b88c1249e95c65eed46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723654921070475613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947,PodSandboxId:d8dddb3cbe2ea008f8f24f5bcc3a457b2b36a4b3777a1f369d40e14c82862570,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723654921048073623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429,PodSandboxId:4f9c1cc51cafc809884b3a0fb23c9912e32fa5ac54a03bb81004df8194aad7ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723654920994237628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e440bc0c9c2cfd95e8b723799d7c57c007aa08237a242b3763cc25c6b932245,PodSandboxId:c17ef5766c346daf8345ef8070bec4b9bef4af264b2342f41616055d301ea79f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723654604787376502,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6,PodSandboxId:a9ffa8acdb931a869f922af0c28d767f7b32dffb9e7d75a86c71f9c36d98d10c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723654520060030522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1,PodSandboxId:5ae877f7722e790345f8a381cb713300c946bc1753f165cd9443f5762c16d072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723654519174701404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15,PodSandboxId:be28d4077d679139e5e8a317aa2743d167625bcd899bd1d700dce6836d9511d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723654507544227848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14,PodSandboxId:6c7ad039d313b6500b38f08f0c5ea577054a1b26eb05382f3f9d240537305a2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723654504566635135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391,PodSandboxId:6f263cd667e0264183be2e699936fcbbd81efbf53eec3f0092b968a88a38d413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723654492878509220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c,PodSandboxId:e9bd2d388e99bcd986cb8e43291b44970815f560362facc95eda8d6aa07e789c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723654492915103545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6,PodSandboxId:a9d0f10c7e34745c0d0d54694b2c4b0eeeb9d45d4dec3b0c4bcfe0488683a919,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723654492879186055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b,PodSandboxId:750456da3a0064edaba7def836de6b47d1e98aead0e72e70e090923dcb13183b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723654492837085799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40f412be-3ba9-4d61-9481-ca29ee74430c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.937136297Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2f6d52e1-031c-4ca3-8a6b-8ac016f91b59 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.937208907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2f6d52e1-031c-4ca3-8a6b-8ac016f91b59 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.938298051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bbcd4209-eb9b-417a-90c4-8b7ebf5e1d56 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.938753781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655022938731800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bbcd4209-eb9b-417a-90c4-8b7ebf5e1d56 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.939389395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47b70ca6-d6e0-4566-a039-e75c98512344 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.939462441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47b70ca6-d6e0-4566-a039-e75c98512344 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.939829964Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9efaf008e647ffcb5f0c423a583a70e502d0ea59692e641ee7de27fa83bb1937,PodSandboxId:7088c953f9919fc941dea99184e30e15de825db4abc05fe9d5144e49b592c2fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723654958637851468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12,PodSandboxId:ce87834f1ac6dd64242c171bdb344ac70587e5f69a887a77dccd74c1f20c0ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723654925142388498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9,PodSandboxId:88e6d7a45fe69132bbb6e9f72e6ce97524fce7ae3a02563652e50328288e573e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723654925090206101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3,PodSandboxId:f76570df814da4afd1a258d16091a2faffe2f4b87159cbb7a2c6d79fbd15d97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723654924910925811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e543773f8b925101af65c0c17102fe3ac7a686565faf3adc98871a29fec93f7,PodSandboxId:e005cf5ff5a20e92b32be25934087564b0c3836e35fda89e3e62ff1ada53f170,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723654924967863633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473,PodSandboxId:1957784acd36be388d7d7b812461cf0ed476328aceea4a7842966e39fe0116e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723654921092802979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1,PodSandboxId:653a47cf7bd0ad47bcef95ff44bb427854e43a9237411b88c1249e95c65eed46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723654921070475613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947,PodSandboxId:d8dddb3cbe2ea008f8f24f5bcc3a457b2b36a4b3777a1f369d40e14c82862570,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723654921048073623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429,PodSandboxId:4f9c1cc51cafc809884b3a0fb23c9912e32fa5ac54a03bb81004df8194aad7ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723654920994237628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e440bc0c9c2cfd95e8b723799d7c57c007aa08237a242b3763cc25c6b932245,PodSandboxId:c17ef5766c346daf8345ef8070bec4b9bef4af264b2342f41616055d301ea79f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723654604787376502,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6,PodSandboxId:a9ffa8acdb931a869f922af0c28d767f7b32dffb9e7d75a86c71f9c36d98d10c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723654520060030522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1,PodSandboxId:5ae877f7722e790345f8a381cb713300c946bc1753f165cd9443f5762c16d072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723654519174701404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15,PodSandboxId:be28d4077d679139e5e8a317aa2743d167625bcd899bd1d700dce6836d9511d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723654507544227848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14,PodSandboxId:6c7ad039d313b6500b38f08f0c5ea577054a1b26eb05382f3f9d240537305a2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723654504566635135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391,PodSandboxId:6f263cd667e0264183be2e699936fcbbd81efbf53eec3f0092b968a88a38d413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723654492878509220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c,PodSandboxId:e9bd2d388e99bcd986cb8e43291b44970815f560362facc95eda8d6aa07e789c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723654492915103545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6,PodSandboxId:a9d0f10c7e34745c0d0d54694b2c4b0eeeb9d45d4dec3b0c4bcfe0488683a919,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723654492879186055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b,PodSandboxId:750456da3a0064edaba7def836de6b47d1e98aead0e72e70e090923dcb13183b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723654492837085799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47b70ca6-d6e0-4566-a039-e75c98512344 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.983139672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5d607ea-aa7b-49ce-8e1b-0e113e69b98e name=/runtime.v1.RuntimeService/Version
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.983214969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5d607ea-aa7b-49ce-8e1b-0e113e69b98e name=/runtime.v1.RuntimeService/Version
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.984375228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e6a571b-c0d7-4ec9-ad11-e4df8060b041 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.984796304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655022984770254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e6a571b-c0d7-4ec9-ad11-e4df8060b041 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.985238720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a35c0c05-53f7-44b2-a4bb-7384ad552bf0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.985292902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a35c0c05-53f7-44b2-a4bb-7384ad552bf0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:42 multinode-986999 crio[2813]: time="2024-08-14 17:03:42.985652150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9efaf008e647ffcb5f0c423a583a70e502d0ea59692e641ee7de27fa83bb1937,PodSandboxId:7088c953f9919fc941dea99184e30e15de825db4abc05fe9d5144e49b592c2fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723654958637851468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12,PodSandboxId:ce87834f1ac6dd64242c171bdb344ac70587e5f69a887a77dccd74c1f20c0ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723654925142388498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9,PodSandboxId:88e6d7a45fe69132bbb6e9f72e6ce97524fce7ae3a02563652e50328288e573e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723654925090206101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3,PodSandboxId:f76570df814da4afd1a258d16091a2faffe2f4b87159cbb7a2c6d79fbd15d97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723654924910925811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e543773f8b925101af65c0c17102fe3ac7a686565faf3adc98871a29fec93f7,PodSandboxId:e005cf5ff5a20e92b32be25934087564b0c3836e35fda89e3e62ff1ada53f170,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723654924967863633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473,PodSandboxId:1957784acd36be388d7d7b812461cf0ed476328aceea4a7842966e39fe0116e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723654921092802979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1,PodSandboxId:653a47cf7bd0ad47bcef95ff44bb427854e43a9237411b88c1249e95c65eed46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723654921070475613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947,PodSandboxId:d8dddb3cbe2ea008f8f24f5bcc3a457b2b36a4b3777a1f369d40e14c82862570,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723654921048073623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429,PodSandboxId:4f9c1cc51cafc809884b3a0fb23c9912e32fa5ac54a03bb81004df8194aad7ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723654920994237628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e440bc0c9c2cfd95e8b723799d7c57c007aa08237a242b3763cc25c6b932245,PodSandboxId:c17ef5766c346daf8345ef8070bec4b9bef4af264b2342f41616055d301ea79f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723654604787376502,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6,PodSandboxId:a9ffa8acdb931a869f922af0c28d767f7b32dffb9e7d75a86c71f9c36d98d10c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723654520060030522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1,PodSandboxId:5ae877f7722e790345f8a381cb713300c946bc1753f165cd9443f5762c16d072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723654519174701404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15,PodSandboxId:be28d4077d679139e5e8a317aa2743d167625bcd899bd1d700dce6836d9511d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723654507544227848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14,PodSandboxId:6c7ad039d313b6500b38f08f0c5ea577054a1b26eb05382f3f9d240537305a2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723654504566635135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391,PodSandboxId:6f263cd667e0264183be2e699936fcbbd81efbf53eec3f0092b968a88a38d413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723654492878509220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c,PodSandboxId:e9bd2d388e99bcd986cb8e43291b44970815f560362facc95eda8d6aa07e789c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723654492915103545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6,PodSandboxId:a9d0f10c7e34745c0d0d54694b2c4b0eeeb9d45d4dec3b0c4bcfe0488683a919,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723654492879186055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b,PodSandboxId:750456da3a0064edaba7def836de6b47d1e98aead0e72e70e090923dcb13183b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723654492837085799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a35c0c05-53f7-44b2-a4bb-7384ad552bf0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:43 multinode-986999 crio[2813]: time="2024-08-14 17:03:43.029192753Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=969d4797-347b-4560-8173-f38fe2580dfe name=/runtime.v1.RuntimeService/Version
	Aug 14 17:03:43 multinode-986999 crio[2813]: time="2024-08-14 17:03:43.029309398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=969d4797-347b-4560-8173-f38fe2580dfe name=/runtime.v1.RuntimeService/Version
	Aug 14 17:03:43 multinode-986999 crio[2813]: time="2024-08-14 17:03:43.031183144Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5221879d-576c-4c71-91b4-c56307434256 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:03:43 multinode-986999 crio[2813]: time="2024-08-14 17:03:43.031764722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655023031734464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5221879d-576c-4c71-91b4-c56307434256 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:03:43 multinode-986999 crio[2813]: time="2024-08-14 17:03:43.032317201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24848a13-f1ed-434f-b3cc-659b63312202 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:43 multinode-986999 crio[2813]: time="2024-08-14 17:03:43.032398867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24848a13-f1ed-434f-b3cc-659b63312202 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:03:43 multinode-986999 crio[2813]: time="2024-08-14 17:03:43.032793480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9efaf008e647ffcb5f0c423a583a70e502d0ea59692e641ee7de27fa83bb1937,PodSandboxId:7088c953f9919fc941dea99184e30e15de825db4abc05fe9d5144e49b592c2fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723654958637851468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12,PodSandboxId:ce87834f1ac6dd64242c171bdb344ac70587e5f69a887a77dccd74c1f20c0ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723654925142388498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9,PodSandboxId:88e6d7a45fe69132bbb6e9f72e6ce97524fce7ae3a02563652e50328288e573e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723654925090206101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3,PodSandboxId:f76570df814da4afd1a258d16091a2faffe2f4b87159cbb7a2c6d79fbd15d97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723654924910925811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e543773f8b925101af65c0c17102fe3ac7a686565faf3adc98871a29fec93f7,PodSandboxId:e005cf5ff5a20e92b32be25934087564b0c3836e35fda89e3e62ff1ada53f170,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723654924967863633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473,PodSandboxId:1957784acd36be388d7d7b812461cf0ed476328aceea4a7842966e39fe0116e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723654921092802979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1,PodSandboxId:653a47cf7bd0ad47bcef95ff44bb427854e43a9237411b88c1249e95c65eed46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723654921070475613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947,PodSandboxId:d8dddb3cbe2ea008f8f24f5bcc3a457b2b36a4b3777a1f369d40e14c82862570,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723654921048073623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429,PodSandboxId:4f9c1cc51cafc809884b3a0fb23c9912e32fa5ac54a03bb81004df8194aad7ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723654920994237628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e440bc0c9c2cfd95e8b723799d7c57c007aa08237a242b3763cc25c6b932245,PodSandboxId:c17ef5766c346daf8345ef8070bec4b9bef4af264b2342f41616055d301ea79f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723654604787376502,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6,PodSandboxId:a9ffa8acdb931a869f922af0c28d767f7b32dffb9e7d75a86c71f9c36d98d10c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723654520060030522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1,PodSandboxId:5ae877f7722e790345f8a381cb713300c946bc1753f165cd9443f5762c16d072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723654519174701404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15,PodSandboxId:be28d4077d679139e5e8a317aa2743d167625bcd899bd1d700dce6836d9511d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723654507544227848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14,PodSandboxId:6c7ad039d313b6500b38f08f0c5ea577054a1b26eb05382f3f9d240537305a2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723654504566635135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391,PodSandboxId:6f263cd667e0264183be2e699936fcbbd81efbf53eec3f0092b968a88a38d413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723654492878509220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c,PodSandboxId:e9bd2d388e99bcd986cb8e43291b44970815f560362facc95eda8d6aa07e789c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723654492915103545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6,PodSandboxId:a9d0f10c7e34745c0d0d54694b2c4b0eeeb9d45d4dec3b0c4bcfe0488683a919,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723654492879186055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b,PodSandboxId:750456da3a0064edaba7def836de6b47d1e98aead0e72e70e090923dcb13183b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723654492837085799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24848a13-f1ed-434f-b3cc-659b63312202 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9efaf008e647f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   7088c953f9919       busybox-7dff88458-2skwv
	b3b00eff2e1f5       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   ce87834f1ac6d       kindnet-pd9v2
	093e81907d400       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   88e6d7a45fe69       coredns-6f6b679f8f-sxtq9
	2e543773f8b92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   e005cf5ff5a20       storage-provisioner
	c8e4b77fe8c4c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   f76570df814da       kube-proxy-l2f8r
	2b9a993eb9457       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   1957784acd36b       kube-controller-manager-multinode-986999
	a018e5ee7d099       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   653a47cf7bd0a       kube-apiserver-multinode-986999
	ca92dab795508       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   d8dddb3cbe2ea       kube-scheduler-multinode-986999
	1112d232a855d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   4f9c1cc51cafc       etcd-multinode-986999
	0e440bc0c9c2c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   c17ef5766c346       busybox-7dff88458-2skwv
	477bf50a43a44       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   a9ffa8acdb931       coredns-6f6b679f8f-sxtq9
	53655be94b95a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   5ae877f7722e7       storage-provisioner
	7fa4efe1c9de6       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   be28d4077d679       kindnet-pd9v2
	065061677ad51       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   6c7ad039d313b       kube-proxy-l2f8r
	8854bb6d7d4f1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   e9bd2d388e99b       etcd-multinode-986999
	8dca3959236fa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   a9d0f10c7e347       kube-apiserver-multinode-986999
	6bd57a8e25a7e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   6f263cd667e02       kube-scheduler-multinode-986999
	89325e75b717c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   750456da3a006       kube-controller-manager-multinode-986999
	
	
	==> coredns [093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33358 - 51070 "HINFO IN 9118466147107003365.2558687126417989913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015072085s
	
	
	==> coredns [477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6] <==
	[INFO] 10.244.1.2:58553 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001524963s
	[INFO] 10.244.1.2:34174 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011503s
	[INFO] 10.244.1.2:34890 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061163s
	[INFO] 10.244.1.2:60403 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001044386s
	[INFO] 10.244.1.2:59354 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087562s
	[INFO] 10.244.1.2:55946 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090177s
	[INFO] 10.244.1.2:53971 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069688s
	[INFO] 10.244.0.3:52297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115786s
	[INFO] 10.244.0.3:36077 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060413s
	[INFO] 10.244.0.3:56632 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078159s
	[INFO] 10.244.0.3:37016 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064659s
	[INFO] 10.244.1.2:45446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140185s
	[INFO] 10.244.1.2:34404 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010402s
	[INFO] 10.244.1.2:49653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078768s
	[INFO] 10.244.1.2:41303 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063224s
	[INFO] 10.244.0.3:56483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152376s
	[INFO] 10.244.0.3:56184 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155033s
	[INFO] 10.244.0.3:55676 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090162s
	[INFO] 10.244.0.3:54612 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115457s
	[INFO] 10.244.1.2:60739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144412s
	[INFO] 10.244.1.2:41073 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000079545s
	[INFO] 10.244.1.2:36117 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005353s
	[INFO] 10.244.1.2:45825 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051833s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-986999
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-986999
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=multinode-986999
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_54_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:54:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-986999
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:03:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:02:03 +0000   Wed, 14 Aug 2024 16:54:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:02:03 +0000   Wed, 14 Aug 2024 16:54:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:02:03 +0000   Wed, 14 Aug 2024 16:54:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:02:03 +0000   Wed, 14 Aug 2024 16:55:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    multinode-986999
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b4338ad0ff74c569f7865e4276ec804
	  System UUID:                8b4338ad-0ff7-4c56-9f78-65e4276ec804
	  Boot ID:                    8dfea163-0bba-4fa4-8bd9-627d2be7c5a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2skwv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 coredns-6f6b679f8f-sxtq9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m40s
	  kube-system                 etcd-multinode-986999                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m46s
	  kube-system                 kindnet-pd9v2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m40s
	  kube-system                 kube-apiserver-multinode-986999             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-controller-manager-multinode-986999    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-proxy-l2f8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-scheduler-multinode-986999             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m38s                  kube-proxy       
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m51s (x8 over 8m51s)  kubelet          Node multinode-986999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m51s (x8 over 8m51s)  kubelet          Node multinode-986999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m51s (x7 over 8m51s)  kubelet          Node multinode-986999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m46s                  kubelet          Node multinode-986999 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m46s                  kubelet          Node multinode-986999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m46s                  kubelet          Node multinode-986999 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m46s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m41s                  node-controller  Node multinode-986999 event: Registered Node multinode-986999 in Controller
	  Normal  NodeReady                8m25s                  kubelet          Node multinode-986999 status is now: NodeReady
	  Normal  Starting                 103s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)    kubelet          Node multinode-986999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)    kubelet          Node multinode-986999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)    kubelet          Node multinode-986999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                    node-controller  Node multinode-986999 event: Registered Node multinode-986999 in Controller
	
	
	Name:               multinode-986999-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-986999-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=multinode-986999
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T17_02_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:02:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-986999-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:03:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:03:12 +0000   Wed, 14 Aug 2024 17:02:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:03:12 +0000   Wed, 14 Aug 2024 17:02:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:03:12 +0000   Wed, 14 Aug 2024 17:02:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:03:12 +0000   Wed, 14 Aug 2024 17:03:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    multinode-986999-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1f7ca59b9c5475f8911ae2c26758d51
	  System UUID:                e1f7ca59-b9c5-475f-8911-ae2c26758d51
	  Boot ID:                    97e9ef56-3c6c-469b-a4ef-4e1fb4f914a1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6b2gm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-ndvs5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m53s
	  kube-system                 kube-proxy-5dgq9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m42s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m53s (x2 over 7m54s)  kubelet     Node multinode-986999-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m53s (x2 over 7m54s)  kubelet     Node multinode-986999-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m53s (x2 over 7m54s)  kubelet     Node multinode-986999-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m53s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m4s                   kubelet     Node multinode-986999-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-986999-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-986999-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-986999-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-986999-m02 status is now: NodeReady
	
	
	Name:               multinode-986999-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-986999-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=multinode-986999
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T17_03_21_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:03:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-986999-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:03:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:03:40 +0000   Wed, 14 Aug 2024 17:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:03:40 +0000   Wed, 14 Aug 2024 17:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:03:40 +0000   Wed, 14 Aug 2024 17:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:03:40 +0000   Wed, 14 Aug 2024 17:03:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    multinode-986999-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ca30703e65244b48be15adb6b811370
	  System UUID:                3ca30703-e652-44b4-8be1-5adb6b811370
	  Boot ID:                    5496f2c0-0870-44a1-9020-5413f4eae583
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-zn75c       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-proxy-68bq4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m25s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m37s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m31s (x2 over 6m31s)  kubelet     Node multinode-986999-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s (x2 over 6m31s)  kubelet     Node multinode-986999-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s (x2 over 6m31s)  kubelet     Node multinode-986999-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m11s                  kubelet     Node multinode-986999-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m42s (x2 over 5m42s)  kubelet     Node multinode-986999-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m42s (x2 over 5m42s)  kubelet     Node multinode-986999-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m42s (x2 over 5m42s)  kubelet     Node multinode-986999-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m23s                  kubelet     Node multinode-986999-m03 status is now: NodeReady
	  Normal  Starting                 23s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-986999-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-986999-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-986999-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-986999-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.059899] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067155] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.168725] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.141067] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.288753] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.856843] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +4.583370] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.058974] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.987951] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.097983] kauditd_printk_skb: 69 callbacks suppressed
	[Aug14 16:55] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	[  +0.114645] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.039058] kauditd_printk_skb: 67 callbacks suppressed
	[Aug14 16:56] kauditd_printk_skb: 14 callbacks suppressed
	[Aug14 17:01] systemd-fstab-generator[2726]: Ignoring "noauto" option for root device
	[  +0.151499] systemd-fstab-generator[2738]: Ignoring "noauto" option for root device
	[  +0.169748] systemd-fstab-generator[2752]: Ignoring "noauto" option for root device
	[  +0.141227] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.275942] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.680551] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +1.545289] systemd-fstab-generator[3019]: Ignoring "noauto" option for root device
	[Aug14 17:02] kauditd_printk_skb: 184 callbacks suppressed
	[  +9.902708] kauditd_printk_skb: 34 callbacks suppressed
	[  +2.962734] systemd-fstab-generator[3866]: Ignoring "noauto" option for root device
	[ +20.933494] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429] <==
	{"level":"info","ts":"2024-08-14T17:02:01.400307Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4bc1bccd4ea9d8cb","local-member-id":"74e924d55c832457","added-peer-id":"74e924d55c832457","added-peer-peer-urls":["https://192.168.39.36:2380"]}
	{"level":"info","ts":"2024-08-14T17:02:01.400421Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4bc1bccd4ea9d8cb","local-member-id":"74e924d55c832457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:02:01.400463Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:02:01.410539Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:02:01.415745Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T17:02:01.418010Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"74e924d55c832457","initial-advertise-peer-urls":["https://192.168.39.36:2380"],"listen-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T17:02:01.418050Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T17:02:01.418175Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-14T17:02:01.418194Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-14T17:02:02.580603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-14T17:02:02.580674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-14T17:02:02.580691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgPreVoteResp from 74e924d55c832457 at term 2"}
	{"level":"info","ts":"2024-08-14T17:02:02.580703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became candidate at term 3"}
	{"level":"info","ts":"2024-08-14T17:02:02.580710Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgVoteResp from 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-08-14T17:02:02.580719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became leader at term 3"}
	{"level":"info","ts":"2024-08-14T17:02:02.580751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74e924d55c832457 elected leader 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-08-14T17:02:02.586163Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"74e924d55c832457","local-member-attributes":"{Name:multinode-986999 ClientURLs:[https://192.168.39.36:2379]}","request-path":"/0/members/74e924d55c832457/attributes","cluster-id":"4bc1bccd4ea9d8cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T17:02:02.586180Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:02:02.586463Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:02:02.586911Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T17:02:02.586943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T17:02:02.587609Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:02:02.587609Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:02:02.588579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-08-14T17:02:02.588782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c] <==
	{"level":"info","ts":"2024-08-14T16:54:53.647316Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:54:53.649628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T16:54:53.652604Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:54:53.654237Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-08-14T16:54:53.671704Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T16:54:53.671736Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-14T16:55:50.055318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.987271ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2618721488912790211 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-986999-m02.17eba6ae44e60e9f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-986999-m02.17eba6ae44e60e9f\" value_size:646 lease:2618721488912789199 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T16:55:50.055675Z","caller":"traceutil/trace.go:171","msg":"trace[1966739086] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"228.08287ms","start":"2024-08-14T16:55:49.827567Z","end":"2024-08-14T16:55:50.055650Z","steps":["trace[1966739086] 'process raft request'  (duration: 79.397328ms)","trace[1966739086] 'compare'  (duration: 147.821656ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T16:55:56.662732Z","caller":"traceutil/trace.go:171","msg":"trace[868545671] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"112.58842ms","start":"2024-08-14T16:55:56.550125Z","end":"2024-08-14T16:55:56.662713Z","steps":["trace[868545671] 'process raft request'  (duration: 112.461145ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:55:57.233129Z","caller":"traceutil/trace.go:171","msg":"trace[299387947] linearizableReadLoop","detail":"{readStateIndex:536; appliedIndex:535; }","duration":"208.801852ms","start":"2024-08-14T16:55:57.024313Z","end":"2024-08-14T16:55:57.233115Z","steps":["trace[299387947] 'read index received'  (duration: 208.681574ms)","trace[299387947] 'applied index is now lower than readState.Index'  (duration: 119.462µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:55:57.233348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.015427ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-986999-m02\" ","response":"range_response_count:1 size:2885"}
	{"level":"info","ts":"2024-08-14T16:55:57.233392Z","caller":"traceutil/trace.go:171","msg":"trace[975187225] range","detail":"{range_begin:/registry/minions/multinode-986999-m02; range_end:; response_count:1; response_revision:514; }","duration":"209.074953ms","start":"2024-08-14T16:55:57.024310Z","end":"2024-08-14T16:55:57.233385Z","steps":["trace[975187225] 'agreement among raft nodes before linearized reading'  (duration: 208.952448ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:57:12.500328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.056811ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:57:12.500401Z","caller":"traceutil/trace.go:171","msg":"trace[1890852243] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:640; }","duration":"131.184918ms","start":"2024-08-14T16:57:12.369201Z","end":"2024-08-14T16:57:12.500386Z","steps":["trace[1890852243] 'range keys from in-memory index tree'  (duration: 131.036668ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:57:12.500459Z","caller":"traceutil/trace.go:171","msg":"trace[1409628696] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"210.780232ms","start":"2024-08-14T16:57:12.289669Z","end":"2024-08-14T16:57:12.500449Z","steps":["trace[1409628696] 'process raft request'  (duration: 126.714541ms)","trace[1409628696] 'compare'  (duration: 83.820132ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T17:00:26.067348Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-14T17:00:26.067456Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-986999","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	{"level":"warn","ts":"2024-08-14T17:00:26.067560Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T17:00:26.067674Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T17:00:26.126501Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T17:00:26.126550Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T17:00:26.128283Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"74e924d55c832457","current-leader-member-id":"74e924d55c832457"}
	{"level":"info","ts":"2024-08-14T17:00:26.130875Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-14T17:00:26.131083Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-14T17:00:26.131116Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-986999","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	
	
	==> kernel <==
	 17:03:43 up 9 min,  0 users,  load average: 0.31, 0.23, 0.12
	Linux multinode-986999 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15] <==
	I0814 16:59:38.461055       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 16:59:48.457517       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 16:59:48.457598       1 main.go:299] handling current node
	I0814 16:59:48.457638       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 16:59:48.457646       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 16:59:48.457803       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 16:59:48.457823       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 16:59:58.459549       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 16:59:58.459652       1 main.go:299] handling current node
	I0814 16:59:58.459685       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 16:59:58.459703       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 16:59:58.459946       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 16:59:58.459981       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 17:00:08.455872       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:00:08.455959       1 main.go:299] handling current node
	I0814 17:00:08.455977       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:00:08.455983       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:00:08.456148       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 17:00:08.456169       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 17:00:18.464328       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:00:18.464456       1 main.go:299] handling current node
	I0814 17:00:18.464496       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:00:18.464502       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:00:18.464713       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 17:00:18.464721       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12] <==
	I0814 17:02:56.053358       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 17:03:06.049807       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:03:06.049960       1 main.go:299] handling current node
	I0814 17:03:06.049995       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:03:06.050014       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:03:06.050157       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 17:03:06.050180       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 17:03:16.049990       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 17:03:16.050132       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 17:03:16.050285       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:03:16.050325       1 main.go:299] handling current node
	I0814 17:03:16.050355       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:03:16.050372       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:03:26.051994       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:03:26.052206       1 main.go:299] handling current node
	I0814 17:03:26.052225       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:03:26.052245       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:03:26.052456       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 17:03:26.052488       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.2.0/24] 
	I0814 17:03:36.052488       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:03:36.052563       1 main.go:299] handling current node
	I0814 17:03:36.052584       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:03:36.052591       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:03:36.052812       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 17:03:36.052837       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6] <==
	I0814 16:54:56.321499       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0814 16:54:56.321533       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 16:54:56.878000       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 16:54:56.929486       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 16:54:57.028363       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0814 16:54:57.043628       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.36]
	I0814 16:54:57.044724       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 16:54:57.052015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 16:54:57.382497       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 16:54:57.826291       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 16:54:57.850579       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0814 16:54:57.859201       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 16:55:02.885484       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0814 16:55:03.151069       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0814 16:56:45.723455       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39616: use of closed network connection
	E0814 16:56:45.889742       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39628: use of closed network connection
	E0814 16:56:46.071958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39644: use of closed network connection
	E0814 16:56:46.234408       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39670: use of closed network connection
	E0814 16:56:46.403309       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39684: use of closed network connection
	E0814 16:56:46.563579       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39690: use of closed network connection
	E0814 16:56:46.868373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39710: use of closed network connection
	E0814 16:56:47.040370       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39720: use of closed network connection
	E0814 16:56:47.208082       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39738: use of closed network connection
	E0814 16:56:47.370251       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39752: use of closed network connection
	I0814 17:00:26.063195       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1] <==
	I0814 17:02:03.852322       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 17:02:03.852425       1 policy_source.go:224] refreshing policies
	I0814 17:02:03.861269       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 17:02:03.863489       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 17:02:03.863554       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 17:02:03.867704       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 17:02:03.868238       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 17:02:03.869172       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 17:02:03.873352       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0814 17:02:03.873641       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 17:02:03.873703       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0814 17:02:03.881973       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 17:02:03.903819       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0814 17:02:03.904019       1 aggregator.go:171] initial CRD sync complete...
	I0814 17:02:03.904050       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 17:02:03.904056       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 17:02:03.904061       1 cache.go:39] Caches are synced for autoregister controller
	I0814 17:02:04.782158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 17:02:05.979606       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 17:02:06.111177       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 17:02:06.122495       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 17:02:06.185112       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 17:02:06.194220       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 17:02:07.326795       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 17:02:07.518741       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473] <==
	I0814 17:03:01.414529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 17:03:01.421331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="45.417µs"
	I0814 17:03:01.435585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.181µs"
	I0814 17:03:02.197635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 17:03:05.135985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.580114ms"
	I0814 17:03:05.136246       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.088µs"
	I0814 17:03:12.794263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 17:03:19.100726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:19.122295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:19.343383       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 17:03:19.343565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:20.811666       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-986999-m03\" does not exist"
	I0814 17:03:20.812154       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 17:03:20.828717       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-986999-m03" podCIDRs=["10.244.2.0/24"]
	I0814 17:03:20.829371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:20.829486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:20.840120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:20.858167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:21.213335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:22.247596       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:31.258358       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:40.212347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:40.212618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 17:03:40.224561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:42.217241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	
	
	==> kube-controller-manager [89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b] <==
	I0814 16:58:00.822683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 16:58:00.822697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:01.925482       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-986999-m03\" does not exist"
	I0814 16:58:01.930636       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 16:58:01.935484       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-986999-m03" podCIDRs=["10.244.4.0/24"]
	I0814 16:58:01.935947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:01.936136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:01.947652       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:02.305000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:02.340071       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:02.616647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:12.217023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:20.291349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:20.291345       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 16:58:20.301711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:22.304277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:59:07.322913       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 16:59:07.323288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:59:07.331003       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 16:59:07.344172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:59:07.349982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 16:59:07.390940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.982677ms"
	I0814 16:59:07.391521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.908µs"
	I0814 16:59:12.491493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:59:22.562405       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	
	
	==> kube-proxy [065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:55:04.733623       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 16:55:04.743192       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	E0814 16:55:04.743285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:55:04.770212       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:55:04.770345       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:55:04.770388       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:55:04.772509       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:55:04.772838       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:55:04.772980       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:55:04.774470       1 config.go:197] "Starting service config controller"
	I0814 16:55:04.774507       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:55:04.774528       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:55:04.774532       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:55:04.776243       1 config.go:326] "Starting node config controller"
	I0814 16:55:04.776270       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:55:04.874702       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:55:04.874717       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:55:04.876421       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:02:05.313064       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:02:05.327157       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	E0814 17:02:05.330274       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:02:05.404169       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:02:05.404258       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:02:05.404299       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:02:05.406514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:02:05.406832       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:02:05.407045       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:02:05.408697       1 config.go:197] "Starting service config controller"
	I0814 17:02:05.409049       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:02:05.409187       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:02:05.409229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:02:05.409719       1 config.go:326] "Starting node config controller"
	I0814 17:02:05.409755       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:02:05.510161       1 shared_informer.go:320] Caches are synced for node config
	I0814 17:02:05.510208       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:02:05.510235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391] <==
	E0814 16:54:55.396646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.319636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:54:56.319685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.327490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 16:54:56.327546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.390167       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 16:54:56.390213       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 16:54:56.439514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 16:54:56.439561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.537242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 16:54:56.537304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.599250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:54:56.599299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.605730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 16:54:56.605781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.619032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 16:54:56.619081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.625089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 16:54:56.625132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.665398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:54:56.665433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.691909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 16:54:56.692049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0814 16:54:59.290045       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 17:00:26.077834       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947] <==
	I0814 17:02:01.980216       1 serving.go:386] Generated self-signed cert in-memory
	W0814 17:02:03.816002       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 17:02:03.816216       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 17:02:03.816293       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 17:02:03.816325       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 17:02:03.886874       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 17:02:03.887107       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:02:03.891485       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 17:02:03.891625       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 17:02:03.891669       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 17:02:03.891702       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 17:02:03.992076       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 17:02:10 multinode-986999 kubelet[3026]: E0814 17:02:10.477937    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654930475825314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:02:14 multinode-986999 kubelet[3026]: I0814 17:02:14.645540    3026 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 14 17:02:20 multinode-986999 kubelet[3026]: E0814 17:02:20.479757    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654940479028067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:02:20 multinode-986999 kubelet[3026]: E0814 17:02:20.479794    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654940479028067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:02:30 multinode-986999 kubelet[3026]: E0814 17:02:30.482856    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654950481968598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:02:30 multinode-986999 kubelet[3026]: E0814 17:02:30.483257    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654950481968598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:02:40 multinode-986999 kubelet[3026]: E0814 17:02:40.486055    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654960484668438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:02:40 multinode-986999 kubelet[3026]: E0814 17:02:40.486153    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654960484668438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:02:50 multinode-986999 kubelet[3026]: E0814 17:02:50.488966    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654970488194531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:02:50 multinode-986999 kubelet[3026]: E0814 17:02:50.489009    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654970488194531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:00 multinode-986999 kubelet[3026]: E0814 17:03:00.423993    3026 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:03:00 multinode-986999 kubelet[3026]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:03:00 multinode-986999 kubelet[3026]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:03:00 multinode-986999 kubelet[3026]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:03:00 multinode-986999 kubelet[3026]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:03:00 multinode-986999 kubelet[3026]: E0814 17:03:00.491730    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654980491136565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:00 multinode-986999 kubelet[3026]: E0814 17:03:00.492029    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654980491136565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:10 multinode-986999 kubelet[3026]: E0814 17:03:10.494092    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654990493489684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:10 multinode-986999 kubelet[3026]: E0814 17:03:10.494451    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723654990493489684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:20 multinode-986999 kubelet[3026]: E0814 17:03:20.496613    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655000495947005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:20 multinode-986999 kubelet[3026]: E0814 17:03:20.496655    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655000495947005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:30 multinode-986999 kubelet[3026]: E0814 17:03:30.500992    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655010499590969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:30 multinode-986999 kubelet[3026]: E0814 17:03:30.501554    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655010499590969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:40 multinode-986999 kubelet[3026]: E0814 17:03:40.503119    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655020502623820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:03:40 multinode-986999 kubelet[3026]: E0814 17:03:40.503174    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655020502623820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:03:42.638321   51286 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19446-13977/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
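The "bufio.Scanner: token too long" error in the stderr block above is bufio.ErrTooLong: the lastStart.txt log was read line by line with a bufio.Scanner whose default token limit (64 KiB) is smaller than one of its lines. A minimal Go sketch of that failure mode and the usual workaround, with the file path reduced to a placeholder:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Placeholder for the .minikube/logs/lastStart.txt path from the error above.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); a single longer line
		// makes Scan() stop and sc.Err() return "bufio.Scanner: token too long".
		// Giving the scanner a larger maximum token size avoids that.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}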
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-986999 -n multinode-986999
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-986999 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (321.19s)
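The kubelet canary error earlier in the log above ("can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)") means the IPv6 nat table is not available on the node. A hedged Go sketch of a quick check, assuming ip6tables and modprobe are on PATH, that root privileges are available, and that ip6table_nat is the module providing that table:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Try to list the ip6tables nat table; failure reproduces the canary error.
		if out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput(); err != nil {
			fmt.Printf("nat table unavailable: %v\n%s", err, out)
			// Loading the module is the remedy the error message itself hints at.
			if out, err := exec.Command("modprobe", "ip6table_nat").CombinedOutput(); err != nil {
				fmt.Printf("modprobe failed: %v\n%s", err, out)
			}
			return
		}
		fmt.Println("ip6tables nat table is available")
	}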

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 stop
E0814 17:04:29.461847   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-986999 stop: exit status 82 (2m0.465323702s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-986999-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
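Exit status 82 with GUEST_STOP_TIMEOUT above means the stop request was issued but multinode-986999-m02 still reported "Running" when the stop budget ran out. A rough Go sketch of that shape, under stated assumptions: stopVM and vmState are hypothetical stand-ins, not minikube's real driver API, and the timeout is shortened for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// Hypothetical stand-ins for the driver calls a stop path would make;
	// vmState always reports "Running" so the timeout path is exercised.
	func stopVM(name string) error   { fmt.Println("stop requested for", name); return nil }
	func vmState(name string) string { return "Running" }

	// waitForStop polls until the VM reports "Stopped" or the context expires.
	// A VM stuck in "Running" surfaces as a timeout, which is roughly what the
	// GUEST_STOP_TIMEOUT / exit status 82 failure above reports.
	func waitForStop(ctx context.Context, name string) error {
		if err := stopVM(name); err != nil {
			return err
		}
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("unable to stop vm, current state %q", vmState(name))
			case <-tick.C:
				if vmState(name) == "Stopped" {
					return nil
				}
			}
		}
	}

	func main() {
		// Two seconds here stands in for the two-minute budget in the real run.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		if err := waitForStop(ctx, "multinode-986999-m02"); err != nil {
			fmt.Println("stop failed:", err)
		}
	}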
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-986999 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-986999 status: exit status 3 (18.774185567s)

                                                
                                                
-- stdout --
	multinode-986999
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-986999-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:06:05.819719   51946 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host
	E0814 17:06:05.819755   51946 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.2:22: connect: no route to host

                                                
                                                
** /stderr **
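The status errors above ("dial tcp 192.168.39.2:22: connect: no route to host") mean the m02 VM's SSH endpoint is unreachable, which is why the node is reported with host: Error and kubelet: Nonexistent. A small standard-library Go probe of the same endpoint (the address is copied from the error above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSSH reports whether a node's SSH port answers at all. "no route to
	// host" comes back as a *net.OpError from DialTimeout when the VM is gone.
	func probeSSH(addr string) string {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return fmt.Sprintf("Error (%v)", err)
		}
		conn.Close()
		return "reachable"
	}

	func main() {
		fmt.Println("multinode-986999-m02:", probeSSH("192.168.39.2:22"))
	}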
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-986999 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-986999 -n multinode-986999
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-986999 logs -n 25: (1.454460796s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m02:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999:/home/docker/cp-test_multinode-986999-m02_multinode-986999.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n multinode-986999 sudo cat                                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /home/docker/cp-test_multinode-986999-m02_multinode-986999.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m02:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03:/home/docker/cp-test_multinode-986999-m02_multinode-986999-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n multinode-986999-m03 sudo cat                                   | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /home/docker/cp-test_multinode-986999-m02_multinode-986999-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp testdata/cp-test.txt                                                | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3655799611/001/cp-test_multinode-986999-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999:/home/docker/cp-test_multinode-986999-m03_multinode-986999.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n multinode-986999 sudo cat                                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /home/docker/cp-test_multinode-986999-m03_multinode-986999.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt                       | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m02:/home/docker/cp-test_multinode-986999-m03_multinode-986999-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n                                                                 | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | multinode-986999-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-986999 ssh -n multinode-986999-m02 sudo cat                                   | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	|         | /home/docker/cp-test_multinode-986999-m03_multinode-986999-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-986999 node stop m03                                                          | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:57 UTC |
	| node    | multinode-986999 node start                                                             | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:57 UTC | 14 Aug 24 16:58 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-986999                                                                | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:58 UTC |                     |
	| stop    | -p multinode-986999                                                                     | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 16:58 UTC |                     |
	| start   | -p multinode-986999                                                                     | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 17:00 UTC | 14 Aug 24 17:03 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-986999                                                                | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 17:03 UTC |                     |
	| node    | multinode-986999 node delete                                                            | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 17:03 UTC | 14 Aug 24 17:03 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-986999 stop                                                                   | multinode-986999 | jenkins | v1.33.1 | 14 Aug 24 17:03 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:00:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:00:25.035973   50203 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:00:25.036221   50203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:00:25.036230   50203 out.go:304] Setting ErrFile to fd 2...
	I0814 17:00:25.036237   50203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:00:25.036454   50203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:00:25.037052   50203 out.go:298] Setting JSON to false
	I0814 17:00:25.037979   50203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6169,"bootTime":1723648656,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:00:25.038041   50203 start.go:139] virtualization: kvm guest
	I0814 17:00:25.040189   50203 out.go:177] * [multinode-986999] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:00:25.041435   50203 notify.go:220] Checking for updates...
	I0814 17:00:25.041472   50203 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:00:25.042824   50203 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:00:25.044184   50203 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:00:25.045328   50203 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:00:25.046521   50203 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:00:25.047826   50203 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:00:25.049406   50203 config.go:182] Loaded profile config "multinode-986999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:00:25.049512   50203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:00:25.049955   50203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:00:25.050003   50203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:00:25.066222   50203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I0814 17:00:25.066699   50203 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:00:25.067252   50203 main.go:141] libmachine: Using API Version  1
	I0814 17:00:25.067281   50203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:00:25.067695   50203 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:00:25.067916   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:00:25.103209   50203 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:00:25.104408   50203 start.go:297] selected driver: kvm2
	I0814 17:00:25.104426   50203 start.go:901] validating driver "kvm2" against &{Name:multinode-986999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-986999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:00:25.104563   50203 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:00:25.104903   50203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:00:25.104975   50203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:00:25.120177   50203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:00:25.121297   50203 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:00:25.121356   50203 cni.go:84] Creating CNI manager for ""
	I0814 17:00:25.121368   50203 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0814 17:00:25.121431   50203 start.go:340] cluster config:
	{Name:multinode-986999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-986999 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:00:25.121577   50203 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:00:25.123425   50203 out.go:177] * Starting "multinode-986999" primary control-plane node in "multinode-986999" cluster
	I0814 17:00:25.124732   50203 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:00:25.124767   50203 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:00:25.124782   50203 cache.go:56] Caching tarball of preloaded images
	I0814 17:00:25.124889   50203 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:00:25.124903   50203 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 17:00:25.125024   50203 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/config.json ...
	I0814 17:00:25.125231   50203 start.go:360] acquireMachinesLock for multinode-986999: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:00:25.125278   50203 start.go:364] duration metric: took 29.372µs to acquireMachinesLock for "multinode-986999"
	I0814 17:00:25.125300   50203 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:00:25.125314   50203 fix.go:54] fixHost starting: 
	I0814 17:00:25.125585   50203 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:00:25.125620   50203 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:00:25.139754   50203 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32843
	I0814 17:00:25.140256   50203 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:00:25.140841   50203 main.go:141] libmachine: Using API Version  1
	I0814 17:00:25.140868   50203 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:00:25.141153   50203 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:00:25.141354   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:00:25.141647   50203 main.go:141] libmachine: (multinode-986999) Calling .GetState
	I0814 17:00:25.143192   50203 fix.go:112] recreateIfNeeded on multinode-986999: state=Running err=<nil>
	W0814 17:00:25.143217   50203 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:00:25.145217   50203 out.go:177] * Updating the running kvm2 "multinode-986999" VM ...
	I0814 17:00:25.146456   50203 machine.go:94] provisionDockerMachine start ...
	I0814 17:00:25.146483   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:00:25.146694   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.149148   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.149626   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.149657   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.149848   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.150011   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.150161   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.150304   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.150455   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:00:25.150730   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:00:25.150747   50203 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:00:25.260389   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-986999
	
	I0814 17:00:25.260422   50203 main.go:141] libmachine: (multinode-986999) Calling .GetMachineName
	I0814 17:00:25.260783   50203 buildroot.go:166] provisioning hostname "multinode-986999"
	I0814 17:00:25.260813   50203 main.go:141] libmachine: (multinode-986999) Calling .GetMachineName
	I0814 17:00:25.261016   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.263795   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.264221   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.264261   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.264370   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.264561   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.264707   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.264828   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.265003   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:00:25.265176   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:00:25.265189   50203 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-986999 && echo "multinode-986999" | sudo tee /etc/hostname
	I0814 17:00:25.387512   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-986999
	
	I0814 17:00:25.387549   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.390615   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.391038   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.391069   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.391245   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.391439   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.391551   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.391694   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.391854   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:00:25.392045   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:00:25.392070   50203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-986999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-986999/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-986999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:00:25.499936   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:00:25.499998   50203 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:00:25.500058   50203 buildroot.go:174] setting up certificates
	I0814 17:00:25.500071   50203 provision.go:84] configureAuth start
	I0814 17:00:25.500089   50203 main.go:141] libmachine: (multinode-986999) Calling .GetMachineName
	I0814 17:00:25.500377   50203 main.go:141] libmachine: (multinode-986999) Calling .GetIP
	I0814 17:00:25.502738   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.503033   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.503062   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.503215   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.505716   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.506023   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.506060   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.506221   50203 provision.go:143] copyHostCerts
	I0814 17:00:25.506256   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:00:25.506305   50203 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:00:25.506318   50203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:00:25.506385   50203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:00:25.506481   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:00:25.506500   50203 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:00:25.506504   50203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:00:25.506529   50203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:00:25.506591   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:00:25.506607   50203 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:00:25.506612   50203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:00:25.506641   50203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:00:25.506691   50203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.multinode-986999 san=[127.0.0.1 192.168.39.36 localhost minikube multinode-986999]
	I0814 17:00:25.783493   50203 provision.go:177] copyRemoteCerts
	I0814 17:00:25.783554   50203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:00:25.783581   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.786295   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.786646   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.786676   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.786877   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.787080   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.787237   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.787379   50203 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 17:00:25.871094   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0814 17:00:25.871184   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:00:25.898646   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0814 17:00:25.898721   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0814 17:00:25.924679   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0814 17:00:25.924772   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:00:25.946983   50203 provision.go:87] duration metric: took 446.898202ms to configureAuth
	I0814 17:00:25.947007   50203 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:00:25.947219   50203 config.go:182] Loaded profile config "multinode-986999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:00:25.947295   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:00:25.949721   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.950091   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:00:25.950124   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:00:25.950269   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:00:25.950534   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.950689   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:00:25.950810   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:00:25.950967   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:00:25.951163   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:00:25.951186   50203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:01:56.585590   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:01:56.585616   50203 machine.go:97] duration metric: took 1m31.439141262s to provisionDockerMachine
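The %!s(MISSING) markers in the SSH command logged above are not part of the command that actually ran: they are what Go's fmt package prints when a format string contains a verb (here the literal %s intended for the remote shell's printf) with no matching argument. A minimal sketch reproducing the marker, assuming the same pass-through of a literal %s (illustrative only, not minikube code):

    package main

    import "fmt"

    func main() {
        // The literal %s is meant for the remote shell's printf, but when the
        // command string goes through a Go formatting call with no argument
        // for it, fmt renders it as %!s(MISSING) in the log.
        logged := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"")
        fmt.Println(logged)
        // Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..."
    }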
	I0814 17:01:56.585636   50203 start.go:293] postStartSetup for "multinode-986999" (driver="kvm2")
	I0814 17:01:56.585646   50203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:01:56.585661   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.585996   50203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:01:56.586049   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:01:56.589370   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.589874   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.589898   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.590071   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:01:56.590270   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.590430   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:01:56.590578   50203 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 17:01:56.675273   50203 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:01:56.679571   50203 command_runner.go:130] > NAME=Buildroot
	I0814 17:01:56.679591   50203 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0814 17:01:56.679598   50203 command_runner.go:130] > ID=buildroot
	I0814 17:01:56.679605   50203 command_runner.go:130] > VERSION_ID=2023.02.9
	I0814 17:01:56.679614   50203 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0814 17:01:56.679697   50203 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:01:56.679723   50203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:01:56.679807   50203 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:01:56.679913   50203 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:01:56.679926   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /etc/ssl/certs/211772.pem
	I0814 17:01:56.680074   50203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:01:56.688797   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:01:56.711346   50203 start.go:296] duration metric: took 125.697537ms for postStartSetup
	I0814 17:01:56.711396   50203 fix.go:56] duration metric: took 1m31.586084899s for fixHost
	I0814 17:01:56.711440   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:01:56.714369   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.714807   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.714840   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.715094   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:01:56.715308   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.715542   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.715704   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:01:56.715926   50203 main.go:141] libmachine: Using SSH client type: native
	I0814 17:01:56.716124   50203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.36 22 <nil> <nil>}
	I0814 17:01:56.716136   50203 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:01:56.819457   50203 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723654916.793494287
	
	I0814 17:01:56.819477   50203 fix.go:216] guest clock: 1723654916.793494287
	I0814 17:01:56.819486   50203 fix.go:229] Guest: 2024-08-14 17:01:56.793494287 +0000 UTC Remote: 2024-08-14 17:01:56.711401758 +0000 UTC m=+91.710240206 (delta=82.092529ms)
	I0814 17:01:56.819547   50203 fix.go:200] guest clock delta is within tolerance: 82.092529ms
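The remote command logged as date +%!s(MISSING).%!N(MISSING) is presumably the literal date +%s.%N, with the percent verbs again consumed by the logging format call; its seconds.nanoseconds output is compared against the local clock and accepted when the delta is within tolerance, as the fix.go lines above show. A rough sketch of such a comparison, assuming a plain parse-and-subtract check (the function name and 2s tolerance are illustrative, not minikube's fix.go logic):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the output of `date +%s.%N` on the guest and returns
    // the absolute difference from the given local time.
    func clockDelta(remote string, local time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(remote), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        d := local.Sub(guest)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        // Guest timestamp taken from the log above.
        d, err := clockDelta("1723654916.793494287", time.Now())
        fmt.Println(d, err, d <= 2*time.Second)
    }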
	I0814 17:01:56.819555   50203 start.go:83] releasing machines lock for "multinode-986999", held for 1m31.69426376s
	I0814 17:01:56.819712   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.820013   50203 main.go:141] libmachine: (multinode-986999) Calling .GetIP
	I0814 17:01:56.822586   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.822954   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.822977   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.823147   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.823786   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.823950   50203 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 17:01:56.824040   50203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:01:56.824100   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:01:56.824163   50203 ssh_runner.go:195] Run: cat /version.json
	I0814 17:01:56.824188   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 17:01:56.826750   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.827101   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.827129   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.827165   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.827257   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:01:56.827440   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.827615   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:01:56.827698   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:56.827722   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:56.827783   50203 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 17:01:56.827911   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 17:01:56.828078   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 17:01:56.828220   50203 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 17:01:56.828355   50203 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 17:01:56.942928   50203 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0814 17:01:56.942973   50203 command_runner.go:130] > {"iso_version": "v1.33.1-1723567878-19429", "kicbase_version": "v0.0.44-1723026928-19389", "minikube_version": "v1.33.1", "commit": "99323a71d52eff08226c75fcaff04297eb5d3584"}
	I0814 17:01:56.943120   50203 ssh_runner.go:195] Run: systemctl --version
	I0814 17:01:56.948890   50203 command_runner.go:130] > systemd 252 (252)
	I0814 17:01:56.948924   50203 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0814 17:01:56.948973   50203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:01:57.103505   50203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0814 17:01:57.110190   50203 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0814 17:01:57.110460   50203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:01:57.110524   50203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:01:57.119349   50203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0814 17:01:57.119371   50203 start.go:495] detecting cgroup driver to use...
	I0814 17:01:57.119435   50203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:01:57.136765   50203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:01:57.150130   50203 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:01:57.150188   50203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:01:57.163754   50203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:01:57.176624   50203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:01:57.329437   50203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:01:57.477998   50203 docker.go:233] disabling docker service ...
	I0814 17:01:57.478080   50203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:01:57.494616   50203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:01:57.508171   50203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:01:57.643664   50203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:01:57.777641   50203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
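The sequence above stops cri-docker and docker, then disables and masks their units so CRI-O remains the only container runtime, and finally confirms docker is no longer active. A compact sketch of that stop/disable/mask pattern, assuming systemd and root access (unit names follow the log; this is not the ssh_runner-based code minikube itself uses):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // disableUnit mirrors the stop/disable/mask sequence applied above to
    // cri-docker and docker (illustrative; requires sudo and systemd).
    func disableUnit(unit string) error {
        for _, args := range [][]string{
            {"systemctl", "stop", "-f", unit},
            {"systemctl", "disable", unit},
            {"systemctl", "mask", unit},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := disableUnit("docker.socket"); err != nil {
            fmt.Println(err)
        }
    }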
	I0814 17:01:57.791089   50203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:01:57.823066   50203 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0814 17:01:57.823123   50203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:01:57.823164   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.833292   50203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:01:57.833388   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.842912   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.852307   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.861662   50203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:01:57.871552   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.881077   50203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:01:57.891338   50203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
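The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image and cgroupfs as the cgroup manager, with conmon_cgroup set to "pod" and net.ipv4.ip_unprivileged_port_start=0 added to default_sysctls. A minimal Go sketch of an equivalent in-place edit for the cgroup_manager line (illustrative only; the local path is a hypothetical stand-in for the drop-in file):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }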
	I0814 17:01:57.901348   50203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:01:57.914134   50203 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0814 17:01:57.914269   50203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:01:57.923005   50203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:01:58.060734   50203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:01:58.285074   50203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:01:58.285146   50203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:01:58.289694   50203 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0814 17:01:58.289722   50203 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0814 17:01:58.289730   50203 command_runner.go:130] > Device: 0,22	Inode: 1333        Links: 1
	I0814 17:01:58.289740   50203 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0814 17:01:58.289751   50203 command_runner.go:130] > Access: 2024-08-14 17:01:58.160329986 +0000
	I0814 17:01:58.289760   50203 command_runner.go:130] > Modify: 2024-08-14 17:01:58.160329986 +0000
	I0814 17:01:58.289772   50203 command_runner.go:130] > Change: 2024-08-14 17:01:58.160329986 +0000
	I0814 17:01:58.289778   50203 command_runner.go:130] >  Birth: -
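The 60-second wait above for /var/run/crio/crio.sock after restarting CRI-O amounts to polling the socket path with a deadline. A small sketch of such a poll, assuming a simple local stat loop rather than minikube's ssh_runner-based retry:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls for path until it exists or the timeout elapses.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }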
	I0814 17:01:58.290018   50203 start.go:563] Will wait 60s for crictl version
	I0814 17:01:58.290065   50203 ssh_runner.go:195] Run: which crictl
	I0814 17:01:58.293348   50203 command_runner.go:130] > /usr/bin/crictl
	I0814 17:01:58.293425   50203 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:01:58.331476   50203 command_runner.go:130] > Version:  0.1.0
	I0814 17:01:58.331505   50203 command_runner.go:130] > RuntimeName:  cri-o
	I0814 17:01:58.331510   50203 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0814 17:01:58.331515   50203 command_runner.go:130] > RuntimeApiVersion:  v1
	I0814 17:01:58.331525   50203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:01:58.331589   50203 ssh_runner.go:195] Run: crio --version
	I0814 17:01:58.358116   50203 command_runner.go:130] > crio version 1.29.1
	I0814 17:01:58.358142   50203 command_runner.go:130] > Version:        1.29.1
	I0814 17:01:58.358148   50203 command_runner.go:130] > GitCommit:      unknown
	I0814 17:01:58.358153   50203 command_runner.go:130] > GitCommitDate:  unknown
	I0814 17:01:58.358157   50203 command_runner.go:130] > GitTreeState:   clean
	I0814 17:01:58.358163   50203 command_runner.go:130] > BuildDate:      2024-08-13T22:49:54Z
	I0814 17:01:58.358167   50203 command_runner.go:130] > GoVersion:      go1.21.6
	I0814 17:01:58.358171   50203 command_runner.go:130] > Compiler:       gc
	I0814 17:01:58.358176   50203 command_runner.go:130] > Platform:       linux/amd64
	I0814 17:01:58.358180   50203 command_runner.go:130] > Linkmode:       dynamic
	I0814 17:01:58.358184   50203 command_runner.go:130] > BuildTags:      
	I0814 17:01:58.358191   50203 command_runner.go:130] >   containers_image_ostree_stub
	I0814 17:01:58.358197   50203 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0814 17:01:58.358203   50203 command_runner.go:130] >   btrfs_noversion
	I0814 17:01:58.358213   50203 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0814 17:01:58.358220   50203 command_runner.go:130] >   libdm_no_deferred_remove
	I0814 17:01:58.358230   50203 command_runner.go:130] >   seccomp
	I0814 17:01:58.358236   50203 command_runner.go:130] > LDFlags:          unknown
	I0814 17:01:58.358240   50203 command_runner.go:130] > SeccompEnabled:   true
	I0814 17:01:58.358244   50203 command_runner.go:130] > AppArmorEnabled:  false
	I0814 17:01:58.358339   50203 ssh_runner.go:195] Run: crio --version
	I0814 17:01:58.384810   50203 command_runner.go:130] > crio version 1.29.1
	I0814 17:01:58.384837   50203 command_runner.go:130] > Version:        1.29.1
	I0814 17:01:58.384846   50203 command_runner.go:130] > GitCommit:      unknown
	I0814 17:01:58.384851   50203 command_runner.go:130] > GitCommitDate:  unknown
	I0814 17:01:58.384855   50203 command_runner.go:130] > GitTreeState:   clean
	I0814 17:01:58.384860   50203 command_runner.go:130] > BuildDate:      2024-08-13T22:49:54Z
	I0814 17:01:58.384864   50203 command_runner.go:130] > GoVersion:      go1.21.6
	I0814 17:01:58.384871   50203 command_runner.go:130] > Compiler:       gc
	I0814 17:01:58.384878   50203 command_runner.go:130] > Platform:       linux/amd64
	I0814 17:01:58.384890   50203 command_runner.go:130] > Linkmode:       dynamic
	I0814 17:01:58.384901   50203 command_runner.go:130] > BuildTags:      
	I0814 17:01:58.384908   50203 command_runner.go:130] >   containers_image_ostree_stub
	I0814 17:01:58.384914   50203 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0814 17:01:58.384920   50203 command_runner.go:130] >   btrfs_noversion
	I0814 17:01:58.384924   50203 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0814 17:01:58.384929   50203 command_runner.go:130] >   libdm_no_deferred_remove
	I0814 17:01:58.384932   50203 command_runner.go:130] >   seccomp
	I0814 17:01:58.384936   50203 command_runner.go:130] > LDFlags:          unknown
	I0814 17:01:58.384940   50203 command_runner.go:130] > SeccompEnabled:   true
	I0814 17:01:58.384944   50203 command_runner.go:130] > AppArmorEnabled:  false
	I0814 17:01:58.388048   50203 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:01:58.389470   50203 main.go:141] libmachine: (multinode-986999) Calling .GetIP
	I0814 17:01:58.392357   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:58.392717   50203 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 17:01:58.392746   50203 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 17:01:58.392954   50203 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 17:01:58.397347   50203 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0814 17:01:58.397462   50203 kubeadm.go:883] updating cluster {Name:multinode-986999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-986999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:01:58.397618   50203 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:01:58.397685   50203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:01:58.436210   50203 command_runner.go:130] > {
	I0814 17:01:58.436243   50203 command_runner.go:130] >   "images": [
	I0814 17:01:58.436249   50203 command_runner.go:130] >     {
	I0814 17:01:58.436264   50203 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0814 17:01:58.436272   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436281   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0814 17:01:58.436286   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436293   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436337   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0814 17:01:58.436354   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0814 17:01:58.436361   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436370   50203 command_runner.go:130] >       "size": "87165492",
	I0814 17:01:58.436378   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436386   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.436397   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.436407   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.436414   50203 command_runner.go:130] >     },
	I0814 17:01:58.436421   50203 command_runner.go:130] >     {
	I0814 17:01:58.436433   50203 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0814 17:01:58.436440   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436450   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0814 17:01:58.436458   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436466   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436480   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0814 17:01:58.436493   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0814 17:01:58.436500   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436507   50203 command_runner.go:130] >       "size": "87190579",
	I0814 17:01:58.436515   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436526   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.436537   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.436545   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.436558   50203 command_runner.go:130] >     },
	I0814 17:01:58.436566   50203 command_runner.go:130] >     {
	I0814 17:01:58.436578   50203 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0814 17:01:58.436586   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436595   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0814 17:01:58.436603   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436611   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436624   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0814 17:01:58.436638   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0814 17:01:58.436646   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436655   50203 command_runner.go:130] >       "size": "1363676",
	I0814 17:01:58.436662   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436670   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.436678   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.436686   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.436699   50203 command_runner.go:130] >     },
	I0814 17:01:58.436706   50203 command_runner.go:130] >     {
	I0814 17:01:58.436720   50203 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0814 17:01:58.436731   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436743   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0814 17:01:58.436751   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436760   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436777   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0814 17:01:58.436798   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0814 17:01:58.436808   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436816   50203 command_runner.go:130] >       "size": "31470524",
	I0814 17:01:58.436825   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436835   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.436845   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.436856   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.436863   50203 command_runner.go:130] >     },
	I0814 17:01:58.436869   50203 command_runner.go:130] >     {
	I0814 17:01:58.436881   50203 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0814 17:01:58.436891   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.436901   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0814 17:01:58.436908   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436916   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.436932   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0814 17:01:58.436947   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0814 17:01:58.436956   50203 command_runner.go:130] >       ],
	I0814 17:01:58.436964   50203 command_runner.go:130] >       "size": "61245718",
	I0814 17:01:58.436975   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.436983   50203 command_runner.go:130] >       "username": "nonroot",
	I0814 17:01:58.436993   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437003   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437013   50203 command_runner.go:130] >     },
	I0814 17:01:58.437020   50203 command_runner.go:130] >     {
	I0814 17:01:58.437031   50203 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0814 17:01:58.437042   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437053   50203 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0814 17:01:58.437063   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437071   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437086   50203 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0814 17:01:58.437101   50203 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0814 17:01:58.437109   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437117   50203 command_runner.go:130] >       "size": "149009664",
	I0814 17:01:58.437128   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.437136   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.437145   50203 command_runner.go:130] >       },
	I0814 17:01:58.437153   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437164   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437172   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437181   50203 command_runner.go:130] >     },
	I0814 17:01:58.437189   50203 command_runner.go:130] >     {
	I0814 17:01:58.437200   50203 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0814 17:01:58.437206   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437215   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0814 17:01:58.437225   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437233   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437250   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0814 17:01:58.437265   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0814 17:01:58.437274   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437283   50203 command_runner.go:130] >       "size": "95233506",
	I0814 17:01:58.437293   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.437306   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.437315   50203 command_runner.go:130] >       },
	I0814 17:01:58.437323   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437333   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437340   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437348   50203 command_runner.go:130] >     },
	I0814 17:01:58.437358   50203 command_runner.go:130] >     {
	I0814 17:01:58.437370   50203 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0814 17:01:58.437381   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437393   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0814 17:01:58.437402   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437410   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437437   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0814 17:01:58.437454   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0814 17:01:58.437461   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437469   50203 command_runner.go:130] >       "size": "89437512",
	I0814 17:01:58.437478   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.437488   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.437496   50203 command_runner.go:130] >       },
	I0814 17:01:58.437504   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437547   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437561   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437568   50203 command_runner.go:130] >     },
	I0814 17:01:58.437575   50203 command_runner.go:130] >     {
	I0814 17:01:58.437586   50203 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0814 17:01:58.437594   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437603   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0814 17:01:58.437610   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437618   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437631   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0814 17:01:58.437643   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0814 17:01:58.437651   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437661   50203 command_runner.go:130] >       "size": "92728217",
	I0814 17:01:58.437671   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.437682   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437690   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437701   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437709   50203 command_runner.go:130] >     },
	I0814 17:01:58.437716   50203 command_runner.go:130] >     {
	I0814 17:01:58.437730   50203 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0814 17:01:58.437741   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437751   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0814 17:01:58.437762   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437773   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437791   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0814 17:01:58.437806   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0814 17:01:58.437813   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437824   50203 command_runner.go:130] >       "size": "68420936",
	I0814 17:01:58.437835   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.437843   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.437851   50203 command_runner.go:130] >       },
	I0814 17:01:58.437860   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.437870   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.437877   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.437887   50203 command_runner.go:130] >     },
	I0814 17:01:58.437894   50203 command_runner.go:130] >     {
	I0814 17:01:58.437906   50203 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0814 17:01:58.437916   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.437925   50203 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0814 17:01:58.437934   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437942   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.437957   50203 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0814 17:01:58.437973   50203 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0814 17:01:58.437983   50203 command_runner.go:130] >       ],
	I0814 17:01:58.437992   50203 command_runner.go:130] >       "size": "742080",
	I0814 17:01:58.438002   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.438010   50203 command_runner.go:130] >         "value": "65535"
	I0814 17:01:58.438020   50203 command_runner.go:130] >       },
	I0814 17:01:58.438029   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.438039   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.438047   50203 command_runner.go:130] >       "pinned": true
	I0814 17:01:58.438056   50203 command_runner.go:130] >     }
	I0814 17:01:58.438063   50203 command_runner.go:130] >   ]
	I0814 17:01:58.438072   50203 command_runner.go:130] > }
	I0814 17:01:58.438263   50203 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:01:58.438278   50203 crio.go:433] Images already preloaded, skipping extraction
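The preload check above parses the `crictl images --output json` listing and concludes that every image needed for Kubernetes v1.31.0 on CRI-O is already present, so the preload tarball is not re-extracted. A sketch of how such a listing can be decoded and checked against a required tag, using only the fields visible in the output above (struct and function names are illustrative, not minikube's crio.go code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageList mirrors the subset of `crictl images --output json` used here.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasTag reports whether the listing contains an image with the wanted tag.
    func hasTag(raw []byte, want string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
        ok, err := hasTag(raw, "registry.k8s.io/pause:3.10")
        fmt.Println(ok, err)
    }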
	I0814 17:01:58.438356   50203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:01:58.468720   50203 command_runner.go:130] > {
	I0814 17:01:58.468745   50203 command_runner.go:130] >   "images": [
	I0814 17:01:58.468751   50203 command_runner.go:130] >     {
	I0814 17:01:58.468764   50203 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0814 17:01:58.468770   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.468778   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0814 17:01:58.468783   50203 command_runner.go:130] >       ],
	I0814 17:01:58.468788   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.468801   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0814 17:01:58.468813   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0814 17:01:58.468822   50203 command_runner.go:130] >       ],
	I0814 17:01:58.468830   50203 command_runner.go:130] >       "size": "87165492",
	I0814 17:01:58.468838   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.468846   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.468858   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.468869   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.468877   50203 command_runner.go:130] >     },
	I0814 17:01:58.468885   50203 command_runner.go:130] >     {
	I0814 17:01:58.468898   50203 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0814 17:01:58.468905   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.468914   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0814 17:01:58.468923   50203 command_runner.go:130] >       ],
	I0814 17:01:58.468931   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.468946   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0814 17:01:58.468961   50203 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0814 17:01:58.468971   50203 command_runner.go:130] >       ],
	I0814 17:01:58.468980   50203 command_runner.go:130] >       "size": "87190579",
	I0814 17:01:58.468989   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.469000   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469009   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469017   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469026   50203 command_runner.go:130] >     },
	I0814 17:01:58.469032   50203 command_runner.go:130] >     {
	I0814 17:01:58.469046   50203 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0814 17:01:58.469055   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469065   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0814 17:01:58.469073   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469081   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469095   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0814 17:01:58.469110   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0814 17:01:58.469119   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469127   50203 command_runner.go:130] >       "size": "1363676",
	I0814 17:01:58.469137   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.469146   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469155   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469163   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469170   50203 command_runner.go:130] >     },
	I0814 17:01:58.469178   50203 command_runner.go:130] >     {
	I0814 17:01:58.469189   50203 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0814 17:01:58.469197   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469208   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0814 17:01:58.469216   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469223   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469239   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0814 17:01:58.469258   50203 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0814 17:01:58.469266   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469273   50203 command_runner.go:130] >       "size": "31470524",
	I0814 17:01:58.469283   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.469291   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469300   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469308   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469335   50203 command_runner.go:130] >     },
	I0814 17:01:58.469344   50203 command_runner.go:130] >     {
	I0814 17:01:58.469354   50203 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0814 17:01:58.469361   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469372   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0814 17:01:58.469380   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469388   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469403   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0814 17:01:58.469419   50203 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0814 17:01:58.469428   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469436   50203 command_runner.go:130] >       "size": "61245718",
	I0814 17:01:58.469445   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.469453   50203 command_runner.go:130] >       "username": "nonroot",
	I0814 17:01:58.469463   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469471   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469479   50203 command_runner.go:130] >     },
	I0814 17:01:58.469486   50203 command_runner.go:130] >     {
	I0814 17:01:58.469498   50203 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0814 17:01:58.469508   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469518   50203 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0814 17:01:58.469525   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469533   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469552   50203 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0814 17:01:58.469566   50203 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0814 17:01:58.469573   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469582   50203 command_runner.go:130] >       "size": "149009664",
	I0814 17:01:58.469591   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.469600   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.469606   50203 command_runner.go:130] >       },
	I0814 17:01:58.469614   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469623   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469632   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469641   50203 command_runner.go:130] >     },
	I0814 17:01:58.469648   50203 command_runner.go:130] >     {
	I0814 17:01:58.469660   50203 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0814 17:01:58.469669   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469679   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0814 17:01:58.469691   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469702   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469716   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0814 17:01:58.469732   50203 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0814 17:01:58.469740   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469748   50203 command_runner.go:130] >       "size": "95233506",
	I0814 17:01:58.469755   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.469764   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.469773   50203 command_runner.go:130] >       },
	I0814 17:01:58.469780   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469787   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469796   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469803   50203 command_runner.go:130] >     },
	I0814 17:01:58.469809   50203 command_runner.go:130] >     {
	I0814 17:01:58.469820   50203 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0814 17:01:58.469837   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.469848   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0814 17:01:58.469857   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469864   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.469887   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0814 17:01:58.469903   50203 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0814 17:01:58.469911   50203 command_runner.go:130] >       ],
	I0814 17:01:58.469918   50203 command_runner.go:130] >       "size": "89437512",
	I0814 17:01:58.469927   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.469936   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.469944   50203 command_runner.go:130] >       },
	I0814 17:01:58.469952   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.469960   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.469968   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.469976   50203 command_runner.go:130] >     },
	I0814 17:01:58.469982   50203 command_runner.go:130] >     {
	I0814 17:01:58.469995   50203 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0814 17:01:58.470005   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.470014   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0814 17:01:58.470023   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470030   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.470043   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0814 17:01:58.470057   50203 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0814 17:01:58.470066   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470073   50203 command_runner.go:130] >       "size": "92728217",
	I0814 17:01:58.470082   50203 command_runner.go:130] >       "uid": null,
	I0814 17:01:58.470090   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.470099   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.470108   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.470115   50203 command_runner.go:130] >     },
	I0814 17:01:58.470121   50203 command_runner.go:130] >     {
	I0814 17:01:58.470133   50203 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0814 17:01:58.470139   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.470149   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0814 17:01:58.470158   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470166   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.470180   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0814 17:01:58.470196   50203 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0814 17:01:58.470206   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470215   50203 command_runner.go:130] >       "size": "68420936",
	I0814 17:01:58.470224   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.470233   50203 command_runner.go:130] >         "value": "0"
	I0814 17:01:58.470239   50203 command_runner.go:130] >       },
	I0814 17:01:58.470247   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.470257   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.470263   50203 command_runner.go:130] >       "pinned": false
	I0814 17:01:58.470271   50203 command_runner.go:130] >     },
	I0814 17:01:58.470278   50203 command_runner.go:130] >     {
	I0814 17:01:58.470292   50203 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0814 17:01:58.470301   50203 command_runner.go:130] >       "repoTags": [
	I0814 17:01:58.470310   50203 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0814 17:01:58.470323   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470332   50203 command_runner.go:130] >       "repoDigests": [
	I0814 17:01:58.470347   50203 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0814 17:01:58.470361   50203 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0814 17:01:58.470370   50203 command_runner.go:130] >       ],
	I0814 17:01:58.470377   50203 command_runner.go:130] >       "size": "742080",
	I0814 17:01:58.470386   50203 command_runner.go:130] >       "uid": {
	I0814 17:01:58.470394   50203 command_runner.go:130] >         "value": "65535"
	I0814 17:01:58.470402   50203 command_runner.go:130] >       },
	I0814 17:01:58.470410   50203 command_runner.go:130] >       "username": "",
	I0814 17:01:58.470419   50203 command_runner.go:130] >       "spec": null,
	I0814 17:01:58.470427   50203 command_runner.go:130] >       "pinned": true
	I0814 17:01:58.470435   50203 command_runner.go:130] >     }
	I0814 17:01:58.470454   50203 command_runner.go:130] >   ]
	I0814 17:01:58.470462   50203 command_runner.go:130] > }
	I0814 17:01:58.470595   50203 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:01:58.470608   50203 cache_images.go:84] Images are preloaded, skipping loading
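The JSON inventory above is what minikube inspects before deciding to skip image loading. As a rough, hedged sketch only (not minikube's own code; the crictl invocation and the selection of fields are assumptions), the same listing could be fetched and decoded like this:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImage mirrors the fields visible in the log above: id, repoTags,
// repoDigests, size (a string in crictl's JSON output), and pinned.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type imageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Assumes crictl is on PATH and can reach the CRI-O socket on the node.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
	}
}

Run against the node above, this would print the same images that the preload check just enumerated.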
	I0814 17:01:58.470617   50203 kubeadm.go:934] updating node { 192.168.39.36 8443 v1.31.0 crio true true} ...
	I0814 17:01:58.470751   50203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-986999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-986999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
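The [Unit]/[Service] fragment and ExecStart line above are what minikube renders into the node's kubelet systemd drop-in. As an illustrative sketch only (the helper name and the fixed flag set are assumptions, not taken from kubeadm.go), assembling that line is plain string templating over the per-node settings:

package main

import "fmt"

// kubeletExecStart rebuilds the ExecStart line logged above from the
// per-node values: Kubernetes version, hostname override, and node IP.
func kubeletExecStart(version, node, ip string) string {
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet"+
		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
		" --config=/var/lib/kubelet/config.yaml"+
		" --hostname-override=%s"+
		" --kubeconfig=/etc/kubernetes/kubelet.conf"+
		" --node-ip=%s", version, node, ip)
}

func main() {
	fmt.Println(kubeletExecStart("v1.31.0", "multinode-986999", "192.168.39.36"))
}

The output matches the ExecStart line in the log; the preceding empty "ExecStart=" is the standard systemd idiom for clearing any previously defined ExecStart before overriding it in a drop-in.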
	I0814 17:01:58.470835   50203 ssh_runner.go:195] Run: crio config
	I0814 17:01:58.502180   50203 command_runner.go:130] ! time="2024-08-14 17:01:58.476078358Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0814 17:01:58.507441   50203 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
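Everything that follows is the merged TOML that crio config prints. As a small hedged sketch (a naive prefix scan rather than a TOML parser, and not how minikube actually consumes this output), two settings visible further down, cgroup_manager and pause_image, could be extracted like this:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Assumes root access on the node; crio config dumps the effective configuration.
	out, err := exec.Command("sudo", "crio", "config").Output()
	if err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Keep only the uncommented key = value lines of interest.
		if strings.HasPrefix(line, "cgroup_manager") || strings.HasPrefix(line, "pause_image =") {
			fmt.Println(line)
		}
	}
}

Against the dump below this would print cgroup_manager = "cgroupfs" and pause_image = "registry.k8s.io/pause:3.10".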
	I0814 17:01:58.514220   50203 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0814 17:01:58.514241   50203 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0814 17:01:58.514248   50203 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0814 17:01:58.514251   50203 command_runner.go:130] > #
	I0814 17:01:58.514258   50203 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0814 17:01:58.514264   50203 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0814 17:01:58.514270   50203 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0814 17:01:58.514277   50203 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0814 17:01:58.514282   50203 command_runner.go:130] > # reload'.
	I0814 17:01:58.514291   50203 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0814 17:01:58.514301   50203 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0814 17:01:58.514314   50203 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0814 17:01:58.514323   50203 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0814 17:01:58.514328   50203 command_runner.go:130] > [crio]
	I0814 17:01:58.514337   50203 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0814 17:01:58.514344   50203 command_runner.go:130] > # containers images, in this directory.
	I0814 17:01:58.514354   50203 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0814 17:01:58.514365   50203 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0814 17:01:58.514377   50203 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0814 17:01:58.514388   50203 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from the root directory.
	I0814 17:01:58.514393   50203 command_runner.go:130] > # imagestore = ""
	I0814 17:01:58.514405   50203 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0814 17:01:58.514412   50203 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0814 17:01:58.514420   50203 command_runner.go:130] > storage_driver = "overlay"
	I0814 17:01:58.514429   50203 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0814 17:01:58.514435   50203 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0814 17:01:58.514442   50203 command_runner.go:130] > storage_option = [
	I0814 17:01:58.514447   50203 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0814 17:01:58.514452   50203 command_runner.go:130] > ]
	I0814 17:01:58.514458   50203 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0814 17:01:58.514465   50203 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0814 17:01:58.514469   50203 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0814 17:01:58.514474   50203 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0814 17:01:58.514480   50203 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0814 17:01:58.514485   50203 command_runner.go:130] > # always happen on a node reboot
	I0814 17:01:58.514490   50203 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0814 17:01:58.514498   50203 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0814 17:01:58.514506   50203 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0814 17:01:58.514510   50203 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0814 17:01:58.514516   50203 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0814 17:01:58.514523   50203 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0814 17:01:58.514532   50203 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0814 17:01:58.514536   50203 command_runner.go:130] > # internal_wipe = true
	I0814 17:01:58.514545   50203 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0814 17:01:58.514555   50203 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0814 17:01:58.514559   50203 command_runner.go:130] > # internal_repair = false
	I0814 17:01:58.514564   50203 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0814 17:01:58.514570   50203 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0814 17:01:58.514577   50203 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0814 17:01:58.514584   50203 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0814 17:01:58.514590   50203 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0814 17:01:58.514597   50203 command_runner.go:130] > [crio.api]
	I0814 17:01:58.514603   50203 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0814 17:01:58.514609   50203 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0814 17:01:58.514615   50203 command_runner.go:130] > # IP address on which the stream server will listen.
	I0814 17:01:58.514621   50203 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0814 17:01:58.514627   50203 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0814 17:01:58.514635   50203 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0814 17:01:58.514641   50203 command_runner.go:130] > # stream_port = "0"
	I0814 17:01:58.514648   50203 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0814 17:01:58.514653   50203 command_runner.go:130] > # stream_enable_tls = false
	I0814 17:01:58.514660   50203 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0814 17:01:58.514668   50203 command_runner.go:130] > # stream_idle_timeout = ""
	I0814 17:01:58.514674   50203 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0814 17:01:58.514684   50203 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0814 17:01:58.514687   50203 command_runner.go:130] > # minutes.
	I0814 17:01:58.514691   50203 command_runner.go:130] > # stream_tls_cert = ""
	I0814 17:01:58.514697   50203 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0814 17:01:58.514705   50203 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0814 17:01:58.514710   50203 command_runner.go:130] > # stream_tls_key = ""
	I0814 17:01:58.514717   50203 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0814 17:01:58.514723   50203 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0814 17:01:58.514749   50203 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0814 17:01:58.514756   50203 command_runner.go:130] > # stream_tls_ca = ""
	I0814 17:01:58.514763   50203 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0814 17:01:58.514767   50203 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0814 17:01:58.514775   50203 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0814 17:01:58.514781   50203 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0814 17:01:58.514786   50203 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0814 17:01:58.514792   50203 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0814 17:01:58.514796   50203 command_runner.go:130] > [crio.runtime]
	I0814 17:01:58.514802   50203 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0814 17:01:58.514809   50203 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0814 17:01:58.514814   50203 command_runner.go:130] > # "nofile=1024:2048"
	I0814 17:01:58.514828   50203 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0814 17:01:58.514835   50203 command_runner.go:130] > # default_ulimits = [
	I0814 17:01:58.514838   50203 command_runner.go:130] > # ]
	I0814 17:01:58.514860   50203 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0814 17:01:58.514867   50203 command_runner.go:130] > # no_pivot = false
	I0814 17:01:58.514873   50203 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0814 17:01:58.514880   50203 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0814 17:01:58.514885   50203 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0814 17:01:58.514891   50203 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0814 17:01:58.514899   50203 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0814 17:01:58.514906   50203 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0814 17:01:58.514913   50203 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0814 17:01:58.514917   50203 command_runner.go:130] > # Cgroup setting for conmon
	I0814 17:01:58.514923   50203 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0814 17:01:58.514929   50203 command_runner.go:130] > conmon_cgroup = "pod"
	I0814 17:01:58.514935   50203 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0814 17:01:58.514942   50203 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0814 17:01:58.514949   50203 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0814 17:01:58.514955   50203 command_runner.go:130] > conmon_env = [
	I0814 17:01:58.514960   50203 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0814 17:01:58.514966   50203 command_runner.go:130] > ]
	I0814 17:01:58.514971   50203 command_runner.go:130] > # Additional environment variables to set for all the
	I0814 17:01:58.514978   50203 command_runner.go:130] > # containers. These are overridden if set in the
	I0814 17:01:58.514983   50203 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0814 17:01:58.514989   50203 command_runner.go:130] > # default_env = [
	I0814 17:01:58.514992   50203 command_runner.go:130] > # ]
	I0814 17:01:58.514997   50203 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0814 17:01:58.515005   50203 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0814 17:01:58.515010   50203 command_runner.go:130] > # selinux = false
	I0814 17:01:58.515016   50203 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0814 17:01:58.515022   50203 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0814 17:01:58.515028   50203 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0814 17:01:58.515032   50203 command_runner.go:130] > # seccomp_profile = ""
	I0814 17:01:58.515039   50203 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0814 17:01:58.515045   50203 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0814 17:01:58.515053   50203 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0814 17:01:58.515057   50203 command_runner.go:130] > # which might increase security.
	I0814 17:01:58.515063   50203 command_runner.go:130] > # This option is currently deprecated,
	I0814 17:01:58.515069   50203 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0814 17:01:58.515076   50203 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0814 17:01:58.515082   50203 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0814 17:01:58.515090   50203 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0814 17:01:58.515096   50203 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0814 17:01:58.515104   50203 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0814 17:01:58.515109   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.515116   50203 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0814 17:01:58.515121   50203 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0814 17:01:58.515128   50203 command_runner.go:130] > # the cgroup blockio controller.
	I0814 17:01:58.515132   50203 command_runner.go:130] > # blockio_config_file = ""
	I0814 17:01:58.515139   50203 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0814 17:01:58.515144   50203 command_runner.go:130] > # blockio parameters.
	I0814 17:01:58.515150   50203 command_runner.go:130] > # blockio_reload = false
	I0814 17:01:58.515157   50203 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0814 17:01:58.515163   50203 command_runner.go:130] > # irqbalance daemon.
	I0814 17:01:58.515168   50203 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0814 17:01:58.515185   50203 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0814 17:01:58.515192   50203 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0814 17:01:58.515199   50203 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0814 17:01:58.515205   50203 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0814 17:01:58.515213   50203 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0814 17:01:58.515218   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.515224   50203 command_runner.go:130] > # rdt_config_file = ""
	I0814 17:01:58.515229   50203 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0814 17:01:58.515235   50203 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0814 17:01:58.515256   50203 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0814 17:01:58.515262   50203 command_runner.go:130] > # separate_pull_cgroup = ""
	I0814 17:01:58.515269   50203 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0814 17:01:58.515277   50203 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0814 17:01:58.515281   50203 command_runner.go:130] > # will be added.
	I0814 17:01:58.515285   50203 command_runner.go:130] > # default_capabilities = [
	I0814 17:01:58.515290   50203 command_runner.go:130] > # 	"CHOWN",
	I0814 17:01:58.515294   50203 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0814 17:01:58.515298   50203 command_runner.go:130] > # 	"FSETID",
	I0814 17:01:58.515302   50203 command_runner.go:130] > # 	"FOWNER",
	I0814 17:01:58.515305   50203 command_runner.go:130] > # 	"SETGID",
	I0814 17:01:58.515309   50203 command_runner.go:130] > # 	"SETUID",
	I0814 17:01:58.515312   50203 command_runner.go:130] > # 	"SETPCAP",
	I0814 17:01:58.515316   50203 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0814 17:01:58.515319   50203 command_runner.go:130] > # 	"KILL",
	I0814 17:01:58.515337   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515352   50203 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0814 17:01:58.515364   50203 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0814 17:01:58.515369   50203 command_runner.go:130] > # add_inheritable_capabilities = false
	I0814 17:01:58.515376   50203 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0814 17:01:58.515384   50203 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0814 17:01:58.515388   50203 command_runner.go:130] > default_sysctls = [
	I0814 17:01:58.515397   50203 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0814 17:01:58.515402   50203 command_runner.go:130] > ]
	I0814 17:01:58.515407   50203 command_runner.go:130] > # List of devices on the host that a
	I0814 17:01:58.515419   50203 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0814 17:01:58.515425   50203 command_runner.go:130] > # allowed_devices = [
	I0814 17:01:58.515428   50203 command_runner.go:130] > # 	"/dev/fuse",
	I0814 17:01:58.515432   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515436   50203 command_runner.go:130] > # List of additional devices, specified as
	I0814 17:01:58.515443   50203 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0814 17:01:58.515451   50203 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0814 17:01:58.515457   50203 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0814 17:01:58.515463   50203 command_runner.go:130] > # additional_devices = [
	I0814 17:01:58.515466   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515471   50203 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0814 17:01:58.515477   50203 command_runner.go:130] > # cdi_spec_dirs = [
	I0814 17:01:58.515481   50203 command_runner.go:130] > # 	"/etc/cdi",
	I0814 17:01:58.515487   50203 command_runner.go:130] > # 	"/var/run/cdi",
	I0814 17:01:58.515491   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515497   50203 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0814 17:01:58.515505   50203 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0814 17:01:58.515509   50203 command_runner.go:130] > # Defaults to false.
	I0814 17:01:58.515514   50203 command_runner.go:130] > # device_ownership_from_security_context = false
	I0814 17:01:58.515522   50203 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0814 17:01:58.515527   50203 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0814 17:01:58.515531   50203 command_runner.go:130] > # hooks_dir = [
	I0814 17:01:58.515535   50203 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0814 17:01:58.515541   50203 command_runner.go:130] > # ]
	I0814 17:01:58.515547   50203 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0814 17:01:58.515555   50203 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0814 17:01:58.515560   50203 command_runner.go:130] > # its default mounts from the following two files:
	I0814 17:01:58.515565   50203 command_runner.go:130] > #
	I0814 17:01:58.515570   50203 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0814 17:01:58.515579   50203 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0814 17:01:58.515584   50203 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0814 17:01:58.515589   50203 command_runner.go:130] > #
	I0814 17:01:58.515595   50203 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0814 17:01:58.515602   50203 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0814 17:01:58.515610   50203 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0814 17:01:58.515615   50203 command_runner.go:130] > #      only add mounts it finds in this file.
	I0814 17:01:58.515619   50203 command_runner.go:130] > #
	I0814 17:01:58.515623   50203 command_runner.go:130] > # default_mounts_file = ""
	I0814 17:01:58.515630   50203 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0814 17:01:58.515637   50203 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0814 17:01:58.515641   50203 command_runner.go:130] > pids_limit = 1024
	I0814 17:01:58.515647   50203 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0814 17:01:58.515655   50203 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0814 17:01:58.515661   50203 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0814 17:01:58.515671   50203 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0814 17:01:58.515677   50203 command_runner.go:130] > # log_size_max = -1
	I0814 17:01:58.515683   50203 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0814 17:01:58.515690   50203 command_runner.go:130] > # log_to_journald = false
	I0814 17:01:58.515696   50203 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0814 17:01:58.515703   50203 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0814 17:01:58.515707   50203 command_runner.go:130] > # Path to directory for container attach sockets.
	I0814 17:01:58.515714   50203 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0814 17:01:58.515720   50203 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0814 17:01:58.515726   50203 command_runner.go:130] > # bind_mount_prefix = ""
	I0814 17:01:58.515731   50203 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0814 17:01:58.515735   50203 command_runner.go:130] > # read_only = false
	I0814 17:01:58.515742   50203 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0814 17:01:58.515751   50203 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0814 17:01:58.515756   50203 command_runner.go:130] > # live configuration reload.
	I0814 17:01:58.515760   50203 command_runner.go:130] > # log_level = "info"
	I0814 17:01:58.515765   50203 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0814 17:01:58.515772   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.515776   50203 command_runner.go:130] > # log_filter = ""
	I0814 17:01:58.515784   50203 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0814 17:01:58.515792   50203 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0814 17:01:58.515798   50203 command_runner.go:130] > # separated by comma.
	I0814 17:01:58.515805   50203 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 17:01:58.515812   50203 command_runner.go:130] > # uid_mappings = ""
	I0814 17:01:58.515818   50203 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0814 17:01:58.515825   50203 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0814 17:01:58.515829   50203 command_runner.go:130] > # separated by comma.
	I0814 17:01:58.515838   50203 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 17:01:58.515841   50203 command_runner.go:130] > # gid_mappings = ""
	I0814 17:01:58.515848   50203 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0814 17:01:58.515856   50203 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0814 17:01:58.515861   50203 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0814 17:01:58.515870   50203 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 17:01:58.515875   50203 command_runner.go:130] > # minimum_mappable_uid = -1
	I0814 17:01:58.515880   50203 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0814 17:01:58.515888   50203 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0814 17:01:58.515893   50203 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0814 17:01:58.515902   50203 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0814 17:01:58.515909   50203 command_runner.go:130] > # minimum_mappable_gid = -1
	I0814 17:01:58.515915   50203 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0814 17:01:58.515923   50203 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0814 17:01:58.515928   50203 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0814 17:01:58.515932   50203 command_runner.go:130] > # ctr_stop_timeout = 30
	I0814 17:01:58.515938   50203 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0814 17:01:58.515946   50203 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0814 17:01:58.515951   50203 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0814 17:01:58.515958   50203 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0814 17:01:58.515963   50203 command_runner.go:130] > drop_infra_ctr = false
	I0814 17:01:58.515970   50203 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0814 17:01:58.515976   50203 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0814 17:01:58.515985   50203 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0814 17:01:58.515990   50203 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0814 17:01:58.515997   50203 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0814 17:01:58.516005   50203 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0814 17:01:58.516011   50203 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0814 17:01:58.516017   50203 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0814 17:01:58.516020   50203 command_runner.go:130] > # shared_cpuset = ""
	I0814 17:01:58.516026   50203 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0814 17:01:58.516031   50203 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0814 17:01:58.516035   50203 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0814 17:01:58.516044   50203 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0814 17:01:58.516049   50203 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0814 17:01:58.516055   50203 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0814 17:01:58.516061   50203 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0814 17:01:58.516067   50203 command_runner.go:130] > # enable_criu_support = false
	I0814 17:01:58.516072   50203 command_runner.go:130] > # Enable/disable the generation of the container,
	I0814 17:01:58.516080   50203 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0814 17:01:58.516086   50203 command_runner.go:130] > # enable_pod_events = false
	I0814 17:01:58.516092   50203 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0814 17:01:58.516105   50203 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0814 17:01:58.516111   50203 command_runner.go:130] > # default_runtime = "runc"
	I0814 17:01:58.516116   50203 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0814 17:01:58.516123   50203 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0814 17:01:58.516133   50203 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0814 17:01:58.516140   50203 command_runner.go:130] > # creation as a file is not desired either.
	I0814 17:01:58.516148   50203 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0814 17:01:58.516155   50203 command_runner.go:130] > # the hostname is being managed dynamically.
	I0814 17:01:58.516159   50203 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0814 17:01:58.516164   50203 command_runner.go:130] > # ]
	I0814 17:01:58.516170   50203 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0814 17:01:58.516178   50203 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0814 17:01:58.516184   50203 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0814 17:01:58.516191   50203 command_runner.go:130] > # Each entry in the table should follow the format:
	I0814 17:01:58.516194   50203 command_runner.go:130] > #
	I0814 17:01:58.516199   50203 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0814 17:01:58.516205   50203 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0814 17:01:58.516227   50203 command_runner.go:130] > # runtime_type = "oci"
	I0814 17:01:58.516234   50203 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0814 17:01:58.516238   50203 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0814 17:01:58.516245   50203 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0814 17:01:58.516250   50203 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0814 17:01:58.516254   50203 command_runner.go:130] > # monitor_env = []
	I0814 17:01:58.516259   50203 command_runner.go:130] > # privileged_without_host_devices = false
	I0814 17:01:58.516265   50203 command_runner.go:130] > # allowed_annotations = []
	I0814 17:01:58.516270   50203 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0814 17:01:58.516275   50203 command_runner.go:130] > # Where:
	I0814 17:01:58.516281   50203 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0814 17:01:58.516291   50203 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0814 17:01:58.516297   50203 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0814 17:01:58.516305   50203 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0814 17:01:58.516309   50203 command_runner.go:130] > #   in $PATH.
	I0814 17:01:58.516317   50203 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0814 17:01:58.516322   50203 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0814 17:01:58.516330   50203 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0814 17:01:58.516334   50203 command_runner.go:130] > #   state.
	I0814 17:01:58.516342   50203 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0814 17:01:58.516348   50203 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0814 17:01:58.516356   50203 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0814 17:01:58.516362   50203 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0814 17:01:58.516370   50203 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0814 17:01:58.516376   50203 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0814 17:01:58.516383   50203 command_runner.go:130] > #   The currently recognized values are:
	I0814 17:01:58.516389   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0814 17:01:58.516398   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0814 17:01:58.516403   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0814 17:01:58.516410   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0814 17:01:58.516421   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0814 17:01:58.516429   50203 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0814 17:01:58.516436   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0814 17:01:58.516444   50203 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0814 17:01:58.516450   50203 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0814 17:01:58.516458   50203 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0814 17:01:58.516462   50203 command_runner.go:130] > #   deprecated option "conmon".
	I0814 17:01:58.516470   50203 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0814 17:01:58.516475   50203 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0814 17:01:58.516484   50203 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0814 17:01:58.516491   50203 command_runner.go:130] > #   should be moved to the container's cgroup
	I0814 17:01:58.516500   50203 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0814 17:01:58.516505   50203 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0814 17:01:58.516510   50203 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0814 17:01:58.516517   50203 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0814 17:01:58.516521   50203 command_runner.go:130] > #
	I0814 17:01:58.516529   50203 command_runner.go:130] > # Using the seccomp notifier feature:
	I0814 17:01:58.516535   50203 command_runner.go:130] > #
	I0814 17:01:58.516541   50203 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0814 17:01:58.516549   50203 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0814 17:01:58.516552   50203 command_runner.go:130] > #
	I0814 17:01:58.516560   50203 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0814 17:01:58.516566   50203 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0814 17:01:58.516572   50203 command_runner.go:130] > #
	I0814 17:01:58.516578   50203 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0814 17:01:58.516581   50203 command_runner.go:130] > # feature.
	I0814 17:01:58.516584   50203 command_runner.go:130] > #
	I0814 17:01:58.516589   50203 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0814 17:01:58.516597   50203 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0814 17:01:58.516603   50203 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0814 17:01:58.516609   50203 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0814 17:01:58.516614   50203 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0814 17:01:58.516617   50203 command_runner.go:130] > #
	I0814 17:01:58.516622   50203 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0814 17:01:58.516628   50203 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0814 17:01:58.516631   50203 command_runner.go:130] > #
	I0814 17:01:58.516636   50203 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0814 17:01:58.516641   50203 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0814 17:01:58.516645   50203 command_runner.go:130] > #
	I0814 17:01:58.516650   50203 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0814 17:01:58.516663   50203 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0814 17:01:58.516666   50203 command_runner.go:130] > # limitation.
	I0814 17:01:58.516671   50203 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0814 17:01:58.516678   50203 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0814 17:01:58.516681   50203 command_runner.go:130] > runtime_type = "oci"
	I0814 17:01:58.516685   50203 command_runner.go:130] > runtime_root = "/run/runc"
	I0814 17:01:58.516689   50203 command_runner.go:130] > runtime_config_path = ""
	I0814 17:01:58.516694   50203 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0814 17:01:58.516700   50203 command_runner.go:130] > monitor_cgroup = "pod"
	I0814 17:01:58.516704   50203 command_runner.go:130] > monitor_exec_cgroup = ""
	I0814 17:01:58.516710   50203 command_runner.go:130] > monitor_env = [
	I0814 17:01:58.516715   50203 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0814 17:01:58.516721   50203 command_runner.go:130] > ]
	I0814 17:01:58.516726   50203 command_runner.go:130] > privileged_without_host_devices = false
	I0814 17:01:58.516732   50203 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0814 17:01:58.516737   50203 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0814 17:01:58.516744   50203 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0814 17:01:58.516753   50203 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0814 17:01:58.516760   50203 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0814 17:01:58.516767   50203 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0814 17:01:58.516776   50203 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0814 17:01:58.516785   50203 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0814 17:01:58.516791   50203 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0814 17:01:58.516798   50203 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0814 17:01:58.516801   50203 command_runner.go:130] > # Example:
	I0814 17:01:58.516805   50203 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0814 17:01:58.516809   50203 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0814 17:01:58.516814   50203 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0814 17:01:58.516818   50203 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0814 17:01:58.516822   50203 command_runner.go:130] > # cpuset = 0
	I0814 17:01:58.516825   50203 command_runner.go:130] > # cpushares = "0-1"
	I0814 17:01:58.516828   50203 command_runner.go:130] > # Where:
	I0814 17:01:58.516833   50203 command_runner.go:130] > # The workload name is workload-type.
	I0814 17:01:58.516839   50203 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0814 17:01:58.516844   50203 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0814 17:01:58.516849   50203 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0814 17:01:58.516856   50203 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0814 17:01:58.516861   50203 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0814 17:01:58.516865   50203 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0814 17:01:58.516871   50203 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0814 17:01:58.516875   50203 command_runner.go:130] > # Default value is set to true
	I0814 17:01:58.516879   50203 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0814 17:01:58.516884   50203 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0814 17:01:58.516888   50203 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0814 17:01:58.516892   50203 command_runner.go:130] > # Default value is set to 'false'
	I0814 17:01:58.516896   50203 command_runner.go:130] > # disable_hostport_mapping = false
	I0814 17:01:58.516902   50203 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0814 17:01:58.516905   50203 command_runner.go:130] > #
	I0814 17:01:58.516919   50203 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0814 17:01:58.516925   50203 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0814 17:01:58.516931   50203 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0814 17:01:58.516937   50203 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0814 17:01:58.516942   50203 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0814 17:01:58.516945   50203 command_runner.go:130] > [crio.image]
	I0814 17:01:58.516950   50203 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0814 17:01:58.516954   50203 command_runner.go:130] > # default_transport = "docker://"
	I0814 17:01:58.516960   50203 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0814 17:01:58.516966   50203 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0814 17:01:58.516969   50203 command_runner.go:130] > # global_auth_file = ""
	I0814 17:01:58.516974   50203 command_runner.go:130] > # The image used to instantiate infra containers.
	I0814 17:01:58.516981   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.516985   50203 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0814 17:01:58.516991   50203 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0814 17:01:58.516997   50203 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0814 17:01:58.517002   50203 command_runner.go:130] > # This option supports live configuration reload.
	I0814 17:01:58.517009   50203 command_runner.go:130] > # pause_image_auth_file = ""
	I0814 17:01:58.517015   50203 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0814 17:01:58.517023   50203 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0814 17:01:58.517028   50203 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0814 17:01:58.517036   50203 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0814 17:01:58.517040   50203 command_runner.go:130] > # pause_command = "/pause"
	I0814 17:01:58.517046   50203 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0814 17:01:58.517054   50203 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0814 17:01:58.517060   50203 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0814 17:01:58.517068   50203 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0814 17:01:58.517076   50203 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0814 17:01:58.517082   50203 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0814 17:01:58.517088   50203 command_runner.go:130] > # pinned_images = [
	I0814 17:01:58.517091   50203 command_runner.go:130] > # ]
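	As an aside (not performed by this test), the three pattern styles described above (exact, trailing glob, keyword) could be exercised with a drop-in file rather than by editing the main configuration; the drop-in file name and the non-pause image references below are illustrative only.

	    sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf >/dev/null <<'EOF'
	    [crio.image]
	    pinned_images = [
	      "registry.k8s.io/pause:3.10",   # exact match: must equal the full image name
	      "registry.k8s.io/kube-*",       # glob match: wildcard only at the end
	      "*coredns*",                    # keyword match: wildcards on both ends
	    ]
	    EOF
	    sudo systemctl restart crio       # pick up the drop-in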
	I0814 17:01:58.517097   50203 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0814 17:01:58.517105   50203 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0814 17:01:58.517111   50203 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0814 17:01:58.517119   50203 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0814 17:01:58.517124   50203 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0814 17:01:58.517131   50203 command_runner.go:130] > # signature_policy = ""
	I0814 17:01:58.517137   50203 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0814 17:01:58.517146   50203 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0814 17:01:58.517152   50203 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0814 17:01:58.517160   50203 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0814 17:01:58.517165   50203 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0814 17:01:58.517170   50203 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0814 17:01:58.517178   50203 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0814 17:01:58.517184   50203 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0814 17:01:58.517190   50203 command_runner.go:130] > # changing them here.
	I0814 17:01:58.517194   50203 command_runner.go:130] > # insecure_registries = [
	I0814 17:01:58.517197   50203 command_runner.go:130] > # ]
	I0814 17:01:58.517203   50203 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0814 17:01:58.517209   50203 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0814 17:01:58.517213   50203 command_runner.go:130] > # image_volumes = "mkdir"
	I0814 17:01:58.517220   50203 command_runner.go:130] > # Temporary directory to use for storing big files
	I0814 17:01:58.517224   50203 command_runner.go:130] > # big_files_temporary_dir = ""
	I0814 17:01:58.517232   50203 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0814 17:01:58.517236   50203 command_runner.go:130] > # CNI plugins.
	I0814 17:01:58.517242   50203 command_runner.go:130] > [crio.network]
	I0814 17:01:58.517248   50203 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0814 17:01:58.517255   50203 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0814 17:01:58.517259   50203 command_runner.go:130] > # cni_default_network = ""
	I0814 17:01:58.517265   50203 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0814 17:01:58.517270   50203 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0814 17:01:58.517275   50203 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0814 17:01:58.517280   50203 command_runner.go:130] > # plugin_dirs = [
	I0814 17:01:58.517284   50203 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0814 17:01:58.517288   50203 command_runner.go:130] > # ]
	I0814 17:01:58.517295   50203 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0814 17:01:58.517299   50203 command_runner.go:130] > [crio.metrics]
	I0814 17:01:58.517306   50203 command_runner.go:130] > # Globally enable or disable metrics support.
	I0814 17:01:58.517310   50203 command_runner.go:130] > enable_metrics = true
	I0814 17:01:58.517314   50203 command_runner.go:130] > # Specify enabled metrics collectors.
	I0814 17:01:58.517321   50203 command_runner.go:130] > # Per default all metrics are enabled.
	I0814 17:01:58.517326   50203 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0814 17:01:58.517335   50203 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0814 17:01:58.517340   50203 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0814 17:01:58.517347   50203 command_runner.go:130] > # metrics_collectors = [
	I0814 17:01:58.517351   50203 command_runner.go:130] > # 	"operations",
	I0814 17:01:58.517357   50203 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0814 17:01:58.517365   50203 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0814 17:01:58.517369   50203 command_runner.go:130] > # 	"operations_errors",
	I0814 17:01:58.517373   50203 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0814 17:01:58.517377   50203 command_runner.go:130] > # 	"image_pulls_by_name",
	I0814 17:01:58.517381   50203 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0814 17:01:58.517385   50203 command_runner.go:130] > # 	"image_pulls_failures",
	I0814 17:01:58.517389   50203 command_runner.go:130] > # 	"image_pulls_successes",
	I0814 17:01:58.517393   50203 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0814 17:01:58.517396   50203 command_runner.go:130] > # 	"image_layer_reuse",
	I0814 17:01:58.517401   50203 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0814 17:01:58.517407   50203 command_runner.go:130] > # 	"containers_oom_total",
	I0814 17:01:58.517411   50203 command_runner.go:130] > # 	"containers_oom",
	I0814 17:01:58.517419   50203 command_runner.go:130] > # 	"processes_defunct",
	I0814 17:01:58.517423   50203 command_runner.go:130] > # 	"operations_total",
	I0814 17:01:58.517427   50203 command_runner.go:130] > # 	"operations_latency_seconds",
	I0814 17:01:58.517432   50203 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0814 17:01:58.517437   50203 command_runner.go:130] > # 	"operations_errors_total",
	I0814 17:01:58.517442   50203 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0814 17:01:58.517449   50203 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0814 17:01:58.517452   50203 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0814 17:01:58.517457   50203 command_runner.go:130] > # 	"image_pulls_success_total",
	I0814 17:01:58.517461   50203 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0814 17:01:58.517468   50203 command_runner.go:130] > # 	"containers_oom_count_total",
	I0814 17:01:58.517473   50203 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0814 17:01:58.517477   50203 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0814 17:01:58.517480   50203 command_runner.go:130] > # ]
	I0814 17:01:58.517485   50203 command_runner.go:130] > # The port on which the metrics server will listen.
	I0814 17:01:58.517491   50203 command_runner.go:130] > # metrics_port = 9090
	I0814 17:01:58.517496   50203 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0814 17:01:58.517502   50203 command_runner.go:130] > # metrics_socket = ""
	I0814 17:01:58.517508   50203 command_runner.go:130] > # The certificate for the secure metrics server.
	I0814 17:01:58.517519   50203 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0814 17:01:58.517527   50203 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0814 17:01:58.517532   50203 command_runner.go:130] > # certificate on any modification event.
	I0814 17:01:58.517537   50203 command_runner.go:130] > # metrics_cert = ""
	I0814 17:01:58.517542   50203 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0814 17:01:58.517549   50203 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0814 17:01:58.517553   50203 command_runner.go:130] > # metrics_key = ""
	I0814 17:01:58.517558   50203 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0814 17:01:58.517564   50203 command_runner.go:130] > [crio.tracing]
	I0814 17:01:58.517569   50203 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0814 17:01:58.517573   50203 command_runner.go:130] > # enable_tracing = false
	I0814 17:01:58.517578   50203 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0814 17:01:58.517584   50203 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0814 17:01:58.517591   50203 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0814 17:01:58.517598   50203 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0814 17:01:58.517602   50203 command_runner.go:130] > # CRI-O NRI configuration.
	I0814 17:01:58.517605   50203 command_runner.go:130] > [crio.nri]
	I0814 17:01:58.517610   50203 command_runner.go:130] > # Globally enable or disable NRI.
	I0814 17:01:58.517616   50203 command_runner.go:130] > # enable_nri = false
	I0814 17:01:58.517620   50203 command_runner.go:130] > # NRI socket to listen on.
	I0814 17:01:58.517627   50203 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0814 17:01:58.517631   50203 command_runner.go:130] > # NRI plugin directory to use.
	I0814 17:01:58.517638   50203 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0814 17:01:58.517642   50203 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0814 17:01:58.517649   50203 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0814 17:01:58.517654   50203 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0814 17:01:58.517659   50203 command_runner.go:130] > # nri_disable_connections = false
	I0814 17:01:58.517666   50203 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0814 17:01:58.517670   50203 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0814 17:01:58.517678   50203 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0814 17:01:58.517682   50203 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0814 17:01:58.517690   50203 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0814 17:01:58.517694   50203 command_runner.go:130] > [crio.stats]
	I0814 17:01:58.517700   50203 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0814 17:01:58.517708   50203 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0814 17:01:58.517712   50203 command_runner.go:130] > # stats_collection_period = 0
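	Side note (not something this test does): the values echoed above are CRI-O's effective configuration, and individual settings can be overridden with a small drop-in under /etc/crio/crio.conf.d/ instead of editing the main file. A minimal sketch, assuming the default metrics port 9090 shown above:

	    sudo tee /etc/crio/crio.conf.d/20-metrics.conf >/dev/null <<'EOF'
	    [crio.metrics]
	    enable_metrics = true
	    metrics_port = 9090

	    [crio.stats]
	    stats_collection_period = 10      # collect pod/container stats every 10s instead of on demand
	    EOF
	    sudo systemctl restart crio
	    curl -s http://127.0.0.1:9090/metrics | head    # Prometheus-format metrics, per the [crio.metrics] comments above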
	I0814 17:01:58.517841   50203 cni.go:84] Creating CNI manager for ""
	I0814 17:01:58.517854   50203 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0814 17:01:58.517864   50203 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:01:58.517889   50203 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.36 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-986999 NodeName:multinode-986999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:01:58.518005   50203 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-986999"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
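	minikube drives kubeadm itself later in this run; purely as an illustration, a generated config like the one above can be exercised by hand in dry-run mode, which plans the init steps without writing anything to the node. The path below is the one this run uploads the file to; the preflight override is shown only because this node already hosts a control plane.

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run \
	      --ignore-preflight-errors=all    # plan only; nothing is persisted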
	
	I0814 17:01:58.518066   50203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:01:58.527717   50203 command_runner.go:130] > kubeadm
	I0814 17:01:58.527736   50203 command_runner.go:130] > kubectl
	I0814 17:01:58.527739   50203 command_runner.go:130] > kubelet
	I0814 17:01:58.527846   50203 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:01:58.527899   50203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:01:58.536772   50203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0814 17:01:58.552474   50203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:01:58.570073   50203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0814 17:01:58.587255   50203 ssh_runner.go:195] Run: grep 192.168.39.36	control-plane.minikube.internal$ /etc/hosts
	I0814 17:01:58.590952   50203 command_runner.go:130] > 192.168.39.36	control-plane.minikube.internal
	I0814 17:01:58.591020   50203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:01:58.740761   50203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:01:58.754839   50203 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999 for IP: 192.168.39.36
	I0814 17:01:58.754873   50203 certs.go:194] generating shared ca certs ...
	I0814 17:01:58.754897   50203 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:01:58.755063   50203 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:01:58.755118   50203 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:01:58.755132   50203 certs.go:256] generating profile certs ...
	I0814 17:01:58.755239   50203 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/client.key
	I0814 17:01:58.755313   50203 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.key.fc6ade07
	I0814 17:01:58.755397   50203 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.key
	I0814 17:01:58.755412   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0814 17:01:58.755435   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0814 17:01:58.755457   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0814 17:01:58.755479   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0814 17:01:58.755498   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0814 17:01:58.755519   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0814 17:01:58.755544   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0814 17:01:58.755591   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0814 17:01:58.755721   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:01:58.755817   50203 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:01:58.755832   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:01:58.755875   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:01:58.755909   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:01:58.755940   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:01:58.756000   50203 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:01:58.756049   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:58.756071   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem -> /usr/share/ca-certificates/21177.pem
	I0814 17:01:58.756091   50203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> /usr/share/ca-certificates/211772.pem
	I0814 17:01:58.756669   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:01:58.780635   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:01:58.803117   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:01:58.826051   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:01:58.847780   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:01:58.870900   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:01:58.893304   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:01:58.915703   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/multinode-986999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:01:58.939624   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:01:58.962946   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:01:58.986989   50203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:01:59.009969   50203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:01:59.025705   50203 ssh_runner.go:195] Run: openssl version
	I0814 17:01:59.030993   50203 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0814 17:01:59.031080   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:01:59.041216   50203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:01:59.045139   50203 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:01:59.045217   50203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:01:59.045280   50203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:01:59.050700   50203 command_runner.go:130] > 3ec20f2e
	I0814 17:01:59.050778   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:01:59.060129   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:01:59.070552   50203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:59.074745   50203 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:59.074776   50203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:59.074814   50203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:01:59.080099   50203 command_runner.go:130] > b5213941
	I0814 17:01:59.080165   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:01:59.088654   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:01:59.099213   50203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:01:59.103377   50203 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:01:59.103411   50203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:01:59.103449   50203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:01:59.108876   50203 command_runner.go:130] > 51391683
	I0814 17:01:59.108948   50203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
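	The three blocks above follow the standard OpenSSL trust-store convention: each CA certificate is hashed with openssl x509 -hash and symlinked as <hash>.0 under /etc/ssl/certs so that verifiers can find it by subject hash. The same steps written out generically (the certificate path is a placeholder):

	    CERT=/usr/share/ca-certificates/minikubeCA.pem      # any PEM CA certificate
	    HASH=$(openssl x509 -hash -noout -in "$CERT")        # e.g. b5213941, as in the log above
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"       # .0 suffix; higher suffixes are used on hash collisions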
	I0814 17:01:59.117495   50203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:01:59.121624   50203 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:01:59.121643   50203 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0814 17:01:59.121649   50203 command_runner.go:130] > Device: 253,1	Inode: 7338518     Links: 1
	I0814 17:01:59.121657   50203 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0814 17:01:59.121666   50203 command_runner.go:130] > Access: 2024-08-14 16:54:48.371171037 +0000
	I0814 17:01:59.121674   50203 command_runner.go:130] > Modify: 2024-08-14 16:54:48.371171037 +0000
	I0814 17:01:59.121682   50203 command_runner.go:130] > Change: 2024-08-14 16:54:48.371171037 +0000
	I0814 17:01:59.121690   50203 command_runner.go:130] >  Birth: 2024-08-14 16:54:48.371171037 +0000
	I0814 17:01:59.121738   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:01:59.126721   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.126857   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:01:59.131834   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.131992   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:01:59.136994   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.137054   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:01:59.142234   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.142298   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:01:59.147295   50203 command_runner.go:130] > Certificate will not expire
	I0814 17:01:59.147352   50203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:01:59.152289   50203 command_runner.go:130] > Certificate will not expire
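	The -checkend 86400 probes above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; the printed message mirrors the exit status, which is what callers normally test. A standalone sketch against one of the certificates copied earlier in this run:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "still valid for at least 24 hours" \
	      || echo "expires within 24 hours (or has already expired)"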
	I0814 17:01:59.152418   50203 kubeadm.go:392] StartCluster: {Name:multinode-986999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-986999 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.36 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.110 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:01:59.152545   50203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:01:59.152601   50203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:01:59.188529   50203 command_runner.go:130] > 477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6
	I0814 17:01:59.188562   50203 command_runner.go:130] > 53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1
	I0814 17:01:59.188573   50203 command_runner.go:130] > 7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15
	I0814 17:01:59.188583   50203 command_runner.go:130] > 065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14
	I0814 17:01:59.188592   50203 command_runner.go:130] > 8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c
	I0814 17:01:59.188602   50203 command_runner.go:130] > 8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6
	I0814 17:01:59.188612   50203 command_runner.go:130] > 6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391
	I0814 17:01:59.188627   50203 command_runner.go:130] > 89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b
	I0814 17:01:59.188656   50203 cri.go:89] found id: "477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6"
	I0814 17:01:59.188668   50203 cri.go:89] found id: "53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1"
	I0814 17:01:59.188679   50203 cri.go:89] found id: "7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15"
	I0814 17:01:59.188685   50203 cri.go:89] found id: "065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14"
	I0814 17:01:59.188697   50203 cri.go:89] found id: "8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c"
	I0814 17:01:59.188707   50203 cri.go:89] found id: "8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6"
	I0814 17:01:59.188712   50203 cri.go:89] found id: "6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391"
	I0814 17:01:59.188722   50203 cri.go:89] found id: "89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b"
	I0814 17:01:59.188727   50203 cri.go:89] found id: ""
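	Each ID above came back from the crictl query issued a few lines earlier; any of them can be examined individually with the same tool. A minimal sketch reusing the first ID from this run:

	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # the query minikube issued
	    sudo crictl inspect 477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6 | head   # JSON detail for one match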
	I0814 17:01:59.188786   50203 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.440841789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655166440818308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5d60e4a-ac92-44f9-9799-0ea6ad418326 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.441722922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d34c366-7f7c-45f9-aaf3-ee43c724f314 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.441775610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d34c366-7f7c-45f9-aaf3-ee43c724f314 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.442184001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9efaf008e647ffcb5f0c423a583a70e502d0ea59692e641ee7de27fa83bb1937,PodSandboxId:7088c953f9919fc941dea99184e30e15de825db4abc05fe9d5144e49b592c2fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723654958637851468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12,PodSandboxId:ce87834f1ac6dd64242c171bdb344ac70587e5f69a887a77dccd74c1f20c0ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723654925142388498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9,PodSandboxId:88e6d7a45fe69132bbb6e9f72e6ce97524fce7ae3a02563652e50328288e573e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723654925090206101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3,PodSandboxId:f76570df814da4afd1a258d16091a2faffe2f4b87159cbb7a2c6d79fbd15d97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723654924910925811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e543773f8b925101af65c0c17102fe3ac7a686565faf3adc98871a29fec93f7,PodSandboxId:e005cf5ff5a20e92b32be25934087564b0c3836e35fda89e3e62ff1ada53f170,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723654924967863633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473,PodSandboxId:1957784acd36be388d7d7b812461cf0ed476328aceea4a7842966e39fe0116e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723654921092802979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1,PodSandboxId:653a47cf7bd0ad47bcef95ff44bb427854e43a9237411b88c1249e95c65eed46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723654921070475613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947,PodSandboxId:d8dddb3cbe2ea008f8f24f5bcc3a457b2b36a4b3777a1f369d40e14c82862570,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723654921048073623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429,PodSandboxId:4f9c1cc51cafc809884b3a0fb23c9912e32fa5ac54a03bb81004df8194aad7ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723654920994237628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e440bc0c9c2cfd95e8b723799d7c57c007aa08237a242b3763cc25c6b932245,PodSandboxId:c17ef5766c346daf8345ef8070bec4b9bef4af264b2342f41616055d301ea79f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723654604787376502,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6,PodSandboxId:a9ffa8acdb931a869f922af0c28d767f7b32dffb9e7d75a86c71f9c36d98d10c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723654520060030522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1,PodSandboxId:5ae877f7722e790345f8a381cb713300c946bc1753f165cd9443f5762c16d072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723654519174701404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15,PodSandboxId:be28d4077d679139e5e8a317aa2743d167625bcd899bd1d700dce6836d9511d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723654507544227848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14,PodSandboxId:6c7ad039d313b6500b38f08f0c5ea577054a1b26eb05382f3f9d240537305a2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723654504566635135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391,PodSandboxId:6f263cd667e0264183be2e699936fcbbd81efbf53eec3f0092b968a88a38d413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723654492878509220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c,PodSandboxId:e9bd2d388e99bcd986cb8e43291b44970815f560362facc95eda8d6aa07e789c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723654492915103545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6,PodSandboxId:a9d0f10c7e34745c0d0d54694b2c4b0eeeb9d45d4dec3b0c4bcfe0488683a919,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723654492879186055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b,PodSandboxId:750456da3a0064edaba7def836de6b47d1e98aead0e72e70e090923dcb13183b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723654492837085799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d34c366-7f7c-45f9-aaf3-ee43c724f314 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.485110512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55147685-4995-4f05-8a3f-923f06750b18 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.485223662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55147685-4995-4f05-8a3f-923f06750b18 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.490927085Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79073cbd-9267-41f1-826c-3c34811f3f3d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.491392907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655166491364337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79073cbd-9267-41f1-826c-3c34811f3f3d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.492964263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4931bb28-3f9c-48b8-b2a5-2c44d93975c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.493039473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4931bb28-3f9c-48b8-b2a5-2c44d93975c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.493416074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9efaf008e647ffcb5f0c423a583a70e502d0ea59692e641ee7de27fa83bb1937,PodSandboxId:7088c953f9919fc941dea99184e30e15de825db4abc05fe9d5144e49b592c2fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723654958637851468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12,PodSandboxId:ce87834f1ac6dd64242c171bdb344ac70587e5f69a887a77dccd74c1f20c0ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723654925142388498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9,PodSandboxId:88e6d7a45fe69132bbb6e9f72e6ce97524fce7ae3a02563652e50328288e573e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723654925090206101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3,PodSandboxId:f76570df814da4afd1a258d16091a2faffe2f4b87159cbb7a2c6d79fbd15d97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723654924910925811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e543773f8b925101af65c0c17102fe3ac7a686565faf3adc98871a29fec93f7,PodSandboxId:e005cf5ff5a20e92b32be25934087564b0c3836e35fda89e3e62ff1ada53f170,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723654924967863633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473,PodSandboxId:1957784acd36be388d7d7b812461cf0ed476328aceea4a7842966e39fe0116e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723654921092802979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1,PodSandboxId:653a47cf7bd0ad47bcef95ff44bb427854e43a9237411b88c1249e95c65eed46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723654921070475613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947,PodSandboxId:d8dddb3cbe2ea008f8f24f5bcc3a457b2b36a4b3777a1f369d40e14c82862570,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723654921048073623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429,PodSandboxId:4f9c1cc51cafc809884b3a0fb23c9912e32fa5ac54a03bb81004df8194aad7ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723654920994237628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e440bc0c9c2cfd95e8b723799d7c57c007aa08237a242b3763cc25c6b932245,PodSandboxId:c17ef5766c346daf8345ef8070bec4b9bef4af264b2342f41616055d301ea79f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723654604787376502,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6,PodSandboxId:a9ffa8acdb931a869f922af0c28d767f7b32dffb9e7d75a86c71f9c36d98d10c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723654520060030522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1,PodSandboxId:5ae877f7722e790345f8a381cb713300c946bc1753f165cd9443f5762c16d072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723654519174701404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15,PodSandboxId:be28d4077d679139e5e8a317aa2743d167625bcd899bd1d700dce6836d9511d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723654507544227848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14,PodSandboxId:6c7ad039d313b6500b38f08f0c5ea577054a1b26eb05382f3f9d240537305a2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723654504566635135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391,PodSandboxId:6f263cd667e0264183be2e699936fcbbd81efbf53eec3f0092b968a88a38d413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723654492878509220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c,PodSandboxId:e9bd2d388e99bcd986cb8e43291b44970815f560362facc95eda8d6aa07e789c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723654492915103545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6,PodSandboxId:a9d0f10c7e34745c0d0d54694b2c4b0eeeb9d45d4dec3b0c4bcfe0488683a919,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723654492879186055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b,PodSandboxId:750456da3a0064edaba7def836de6b47d1e98aead0e72e70e090923dcb13183b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723654492837085799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4931bb28-3f9c-48b8-b2a5-2c44d93975c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.532292213Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a02ff207-7942-46f6-b10b-d7684bf5c8ce name=/runtime.v1.RuntimeService/Version
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.532383670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a02ff207-7942-46f6-b10b-d7684bf5c8ce name=/runtime.v1.RuntimeService/Version
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.533721213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2009484b-8182-4fa3-b430-acabd82db46f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.534204675Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655166534180595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2009484b-8182-4fa3-b430-acabd82db46f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.534672854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55615236-7a88-4d03-a244-14b4cc3ce33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.534740347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55615236-7a88-4d03-a244-14b4cc3ce33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.535136582Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9efaf008e647ffcb5f0c423a583a70e502d0ea59692e641ee7de27fa83bb1937,PodSandboxId:7088c953f9919fc941dea99184e30e15de825db4abc05fe9d5144e49b592c2fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723654958637851468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12,PodSandboxId:ce87834f1ac6dd64242c171bdb344ac70587e5f69a887a77dccd74c1f20c0ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723654925142388498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9,PodSandboxId:88e6d7a45fe69132bbb6e9f72e6ce97524fce7ae3a02563652e50328288e573e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723654925090206101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3,PodSandboxId:f76570df814da4afd1a258d16091a2faffe2f4b87159cbb7a2c6d79fbd15d97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723654924910925811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e543773f8b925101af65c0c17102fe3ac7a686565faf3adc98871a29fec93f7,PodSandboxId:e005cf5ff5a20e92b32be25934087564b0c3836e35fda89e3e62ff1ada53f170,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723654924967863633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473,PodSandboxId:1957784acd36be388d7d7b812461cf0ed476328aceea4a7842966e39fe0116e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723654921092802979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1,PodSandboxId:653a47cf7bd0ad47bcef95ff44bb427854e43a9237411b88c1249e95c65eed46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723654921070475613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947,PodSandboxId:d8dddb3cbe2ea008f8f24f5bcc3a457b2b36a4b3777a1f369d40e14c82862570,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723654921048073623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429,PodSandboxId:4f9c1cc51cafc809884b3a0fb23c9912e32fa5ac54a03bb81004df8194aad7ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723654920994237628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e440bc0c9c2cfd95e8b723799d7c57c007aa08237a242b3763cc25c6b932245,PodSandboxId:c17ef5766c346daf8345ef8070bec4b9bef4af264b2342f41616055d301ea79f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723654604787376502,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6,PodSandboxId:a9ffa8acdb931a869f922af0c28d767f7b32dffb9e7d75a86c71f9c36d98d10c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723654520060030522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1,PodSandboxId:5ae877f7722e790345f8a381cb713300c946bc1753f165cd9443f5762c16d072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723654519174701404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15,PodSandboxId:be28d4077d679139e5e8a317aa2743d167625bcd899bd1d700dce6836d9511d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723654507544227848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14,PodSandboxId:6c7ad039d313b6500b38f08f0c5ea577054a1b26eb05382f3f9d240537305a2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723654504566635135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391,PodSandboxId:6f263cd667e0264183be2e699936fcbbd81efbf53eec3f0092b968a88a38d413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723654492878509220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c,PodSandboxId:e9bd2d388e99bcd986cb8e43291b44970815f560362facc95eda8d6aa07e789c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723654492915103545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6,PodSandboxId:a9d0f10c7e34745c0d0d54694b2c4b0eeeb9d45d4dec3b0c4bcfe0488683a919,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723654492879186055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b,PodSandboxId:750456da3a0064edaba7def836de6b47d1e98aead0e72e70e090923dcb13183b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723654492837085799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55615236-7a88-4d03-a244-14b4cc3ce33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.574510138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=33d0ed91-500f-43f8-b117-295c4901f1f8 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.574642171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=33d0ed91-500f-43f8-b117-295c4901f1f8 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.576119997Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ae08dc4-4b67-4a61-b655-d295c826a766 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.576542986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655166576518601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ae08dc4-4b67-4a61-b655-d295c826a766 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.577105342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22c2f222-fd4f-42cb-9f29-03cd34fba765 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.577208217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22c2f222-fd4f-42cb-9f29-03cd34fba765 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:06:06 multinode-986999 crio[2813]: time="2024-08-14 17:06:06.577557795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9efaf008e647ffcb5f0c423a583a70e502d0ea59692e641ee7de27fa83bb1937,PodSandboxId:7088c953f9919fc941dea99184e30e15de825db4abc05fe9d5144e49b592c2fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723654958637851468,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12,PodSandboxId:ce87834f1ac6dd64242c171bdb344ac70587e5f69a887a77dccd74c1f20c0ae1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723654925142388498,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9,PodSandboxId:88e6d7a45fe69132bbb6e9f72e6ce97524fce7ae3a02563652e50328288e573e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723654925090206101,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3,PodSandboxId:f76570df814da4afd1a258d16091a2faffe2f4b87159cbb7a2c6d79fbd15d97a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723654924910925811,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e543773f8b925101af65c0c17102fe3ac7a686565faf3adc98871a29fec93f7,PodSandboxId:e005cf5ff5a20e92b32be25934087564b0c3836e35fda89e3e62ff1ada53f170,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723654924967863633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473,PodSandboxId:1957784acd36be388d7d7b812461cf0ed476328aceea4a7842966e39fe0116e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723654921092802979,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map[string]
string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1,PodSandboxId:653a47cf7bd0ad47bcef95ff44bb427854e43a9237411b88c1249e95c65eed46,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723654921070475613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]string{io.kube
rnetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947,PodSandboxId:d8dddb3cbe2ea008f8f24f5bcc3a457b2b36a4b3777a1f369d40e14c82862570,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723654921048073623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429,PodSandboxId:4f9c1cc51cafc809884b3a0fb23c9912e32fa5ac54a03bb81004df8194aad7ed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723654920994237628,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e440bc0c9c2cfd95e8b723799d7c57c007aa08237a242b3763cc25c6b932245,PodSandboxId:c17ef5766c346daf8345ef8070bec4b9bef4af264b2342f41616055d301ea79f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723654604787376502,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-2skwv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3ded42a9-8784-4fc3-b9a7-a7e3f092ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6,PodSandboxId:a9ffa8acdb931a869f922af0c28d767f7b32dffb9e7d75a86c71f9c36d98d10c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723654520060030522,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-sxtq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9640da3-53c8-4aba-a906-b99c130fe732,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53655be94b95a012b16d4cc8addb5eb89b496fee03e6df4f6a08ce81e2d465e1,PodSandboxId:5ae877f7722e790345f8a381cb713300c946bc1753f165cd9443f5762c16d072,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723654519174701404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 1e20e430-5890-4b22-8faa-e2397e0fbf51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15,PodSandboxId:be28d4077d679139e5e8a317aa2743d167625bcd899bd1d700dce6836d9511d9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723654507544227848,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9v2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: ff4cd8c0-3315-4d15-ab4d-20bd78455f37,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14,PodSandboxId:6c7ad039d313b6500b38f08f0c5ea577054a1b26eb05382f3f9d240537305a2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723654504566635135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l2f8r,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4eff4cf1-c80c-41d4-a4eb-84de71118384,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391,PodSandboxId:6f263cd667e0264183be2e699936fcbbd81efbf53eec3f0092b968a88a38d413,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723654492878509220,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e
32245e4b0d179137032fe925878038,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c,PodSandboxId:e9bd2d388e99bcd986cb8e43291b44970815f560362facc95eda8d6aa07e789c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723654492915103545,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1669ef469a77149c840a7c14d3c857,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6,PodSandboxId:a9d0f10c7e34745c0d0d54694b2c4b0eeeb9d45d4dec3b0c4bcfe0488683a919,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723654492879186055,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d92416aa7a630dfacbcf4e86e8e7119c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b,PodSandboxId:750456da3a0064edaba7def836de6b47d1e98aead0e72e70e090923dcb13183b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723654492837085799,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-986999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31e345ae3363b4a7b3f3348f66460c50,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22c2f222-fd4f-42cb-9f29-03cd34fba765 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9efaf008e647f       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   7088c953f9919       busybox-7dff88458-2skwv
	b3b00eff2e1f5       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   ce87834f1ac6d       kindnet-pd9v2
	093e81907d400       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   88e6d7a45fe69       coredns-6f6b679f8f-sxtq9
	2e543773f8b92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   e005cf5ff5a20       storage-provisioner
	c8e4b77fe8c4c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   f76570df814da       kube-proxy-l2f8r
	2b9a993eb9457       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   1957784acd36b       kube-controller-manager-multinode-986999
	a018e5ee7d099       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   653a47cf7bd0a       kube-apiserver-multinode-986999
	ca92dab795508       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   d8dddb3cbe2ea       kube-scheduler-multinode-986999
	1112d232a855d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   4f9c1cc51cafc       etcd-multinode-986999
	0e440bc0c9c2c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   c17ef5766c346       busybox-7dff88458-2skwv
	477bf50a43a44       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   a9ffa8acdb931       coredns-6f6b679f8f-sxtq9
	53655be94b95a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   5ae877f7722e7       storage-provisioner
	7fa4efe1c9de6       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   be28d4077d679       kindnet-pd9v2
	065061677ad51       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      11 minutes ago      Exited              kube-proxy                0                   6c7ad039d313b       kube-proxy-l2f8r
	8854bb6d7d4f1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      11 minutes ago      Exited              etcd                      0                   e9bd2d388e99b       etcd-multinode-986999
	8dca3959236fa       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      11 minutes ago      Exited              kube-apiserver            0                   a9d0f10c7e347       kube-apiserver-multinode-986999
	6bd57a8e25a7e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      11 minutes ago      Exited              kube-scheduler            0                   6f263cd667e02       kube-scheduler-multinode-986999
	89325e75b717c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      11 minutes ago      Exited              kube-controller-manager   0                   750456da3a006       kube-controller-manager-multinode-986999
	
	
	==> coredns [093e81907d400a0e8ad10bcf1345d2cda5c5998f3d2e270183919eeed79d16c9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33358 - 51070 "HINFO IN 9118466147107003365.2558687126417989913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015072085s
	
	
	==> coredns [477bf50a43a44b525697100bbd3506a1022f2934122c5597ce12502e68c5edd6] <==
	[INFO] 10.244.1.2:58553 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001524963s
	[INFO] 10.244.1.2:34174 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011503s
	[INFO] 10.244.1.2:34890 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061163s
	[INFO] 10.244.1.2:60403 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001044386s
	[INFO] 10.244.1.2:59354 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087562s
	[INFO] 10.244.1.2:55946 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090177s
	[INFO] 10.244.1.2:53971 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069688s
	[INFO] 10.244.0.3:52297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115786s
	[INFO] 10.244.0.3:36077 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060413s
	[INFO] 10.244.0.3:56632 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078159s
	[INFO] 10.244.0.3:37016 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000064659s
	[INFO] 10.244.1.2:45446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140185s
	[INFO] 10.244.1.2:34404 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010402s
	[INFO] 10.244.1.2:49653 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078768s
	[INFO] 10.244.1.2:41303 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063224s
	[INFO] 10.244.0.3:56483 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152376s
	[INFO] 10.244.0.3:56184 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155033s
	[INFO] 10.244.0.3:55676 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000090162s
	[INFO] 10.244.0.3:54612 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000115457s
	[INFO] 10.244.1.2:60739 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144412s
	[INFO] 10.244.1.2:41073 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000079545s
	[INFO] 10.244.1.2:36117 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00005353s
	[INFO] 10.244.1.2:45825 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000051833s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-986999
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-986999
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=multinode-986999
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T16_54_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 16:54:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-986999
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:02:03 +0000   Wed, 14 Aug 2024 16:54:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:02:03 +0000   Wed, 14 Aug 2024 16:54:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:02:03 +0000   Wed, 14 Aug 2024 16:54:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:02:03 +0000   Wed, 14 Aug 2024 16:55:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.36
	  Hostname:    multinode-986999
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b4338ad0ff74c569f7865e4276ec804
	  System UUID:                8b4338ad-0ff7-4c56-9f78-65e4276ec804
	  Boot ID:                    8dfea163-0bba-4fa4-8bd9-627d2be7c5a6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2skwv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                 coredns-6f6b679f8f-sxtq9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-multinode-986999                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-pd9v2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-multinode-986999             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-986999    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-l2f8r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-multinode-986999             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node multinode-986999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node multinode-986999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node multinode-986999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                  kubelet          Node multinode-986999 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                  kubelet          Node multinode-986999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                  kubelet          Node multinode-986999 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                  node-controller  Node multinode-986999 event: Registered Node multinode-986999 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-986999 status is now: NodeReady
	  Normal  Starting                 4m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node multinode-986999 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node multinode-986999 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node multinode-986999 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m59s                node-controller  Node multinode-986999 event: Registered Node multinode-986999 in Controller
	
	
	Name:               multinode-986999-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-986999-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=multinode-986999
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_14T17_02_42_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:02:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-986999-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:03:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 14 Aug 2024 17:03:12 +0000   Wed, 14 Aug 2024 17:04:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 14 Aug 2024 17:03:12 +0000   Wed, 14 Aug 2024 17:04:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 14 Aug 2024 17:03:12 +0000   Wed, 14 Aug 2024 17:04:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 14 Aug 2024 17:03:12 +0000   Wed, 14 Aug 2024 17:04:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    multinode-986999-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1f7ca59b9c5475f8911ae2c26758d51
	  System UUID:                e1f7ca59-b9c5-475f-8911-ae2c26758d51
	  Boot ID:                    97e9ef56-3c6c-469b-a4ef-4e1fb4f914a1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6b2gm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-ndvs5              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-5dgq9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-986999-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-986999-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-986999-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m27s                  kubelet          Node multinode-986999-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-986999-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-986999-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-986999-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-986999-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-986999-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.059899] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067155] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.168725] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.141067] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.288753] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.856843] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +4.583370] systemd-fstab-generator[897]: Ignoring "noauto" option for root device
	[  +0.058974] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.987951] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.097983] kauditd_printk_skb: 69 callbacks suppressed
	[Aug14 16:55] systemd-fstab-generator[1343]: Ignoring "noauto" option for root device
	[  +0.114645] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.039058] kauditd_printk_skb: 67 callbacks suppressed
	[Aug14 16:56] kauditd_printk_skb: 14 callbacks suppressed
	[Aug14 17:01] systemd-fstab-generator[2726]: Ignoring "noauto" option for root device
	[  +0.151499] systemd-fstab-generator[2738]: Ignoring "noauto" option for root device
	[  +0.169748] systemd-fstab-generator[2752]: Ignoring "noauto" option for root device
	[  +0.141227] systemd-fstab-generator[2764]: Ignoring "noauto" option for root device
	[  +0.275942] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.680551] systemd-fstab-generator[2898]: Ignoring "noauto" option for root device
	[  +1.545289] systemd-fstab-generator[3019]: Ignoring "noauto" option for root device
	[Aug14 17:02] kauditd_printk_skb: 184 callbacks suppressed
	[  +9.902708] kauditd_printk_skb: 34 callbacks suppressed
	[  +2.962734] systemd-fstab-generator[3866]: Ignoring "noauto" option for root device
	[ +20.933494] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1112d232a855de91079496d16db1b2dad08932f18afaf02e62ccf6f32bd12429] <==
	{"level":"info","ts":"2024-08-14T17:02:01.400307Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"4bc1bccd4ea9d8cb","local-member-id":"74e924d55c832457","added-peer-id":"74e924d55c832457","added-peer-peer-urls":["https://192.168.39.36:2380"]}
	{"level":"info","ts":"2024-08-14T17:02:01.400421Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"4bc1bccd4ea9d8cb","local-member-id":"74e924d55c832457","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:02:01.400463Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:02:01.410539Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:02:01.415745Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T17:02:01.418010Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"74e924d55c832457","initial-advertise-peer-urls":["https://192.168.39.36:2380"],"listen-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T17:02:01.418050Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T17:02:01.418175Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-14T17:02:01.418194Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-14T17:02:02.580603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-14T17:02:02.580674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-14T17:02:02.580691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgPreVoteResp from 74e924d55c832457 at term 2"}
	{"level":"info","ts":"2024-08-14T17:02:02.580703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became candidate at term 3"}
	{"level":"info","ts":"2024-08-14T17:02:02.580710Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 received MsgVoteResp from 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-08-14T17:02:02.580719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"74e924d55c832457 became leader at term 3"}
	{"level":"info","ts":"2024-08-14T17:02:02.580751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 74e924d55c832457 elected leader 74e924d55c832457 at term 3"}
	{"level":"info","ts":"2024-08-14T17:02:02.586163Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"74e924d55c832457","local-member-attributes":"{Name:multinode-986999 ClientURLs:[https://192.168.39.36:2379]}","request-path":"/0/members/74e924d55c832457/attributes","cluster-id":"4bc1bccd4ea9d8cb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T17:02:02.586180Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:02:02.586463Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:02:02.586911Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T17:02:02.586943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T17:02:02.587609Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:02:02.587609Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:02:02.588579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-08-14T17:02:02.588782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [8854bb6d7d4f172c02bb83aacb5d9afaf0c590d34b13261a6fee5df665395c1c] <==
	{"level":"info","ts":"2024-08-14T16:54:53.647316Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:54:53.649628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T16:54:53.652604Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T16:54:53.654237Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.36:2379"}
	{"level":"info","ts":"2024-08-14T16:54:53.671704Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T16:54:53.671736Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-14T16:55:50.055318Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.987271ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2618721488912790211 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-986999-m02.17eba6ae44e60e9f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-986999-m02.17eba6ae44e60e9f\" value_size:646 lease:2618721488912789199 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T16:55:50.055675Z","caller":"traceutil/trace.go:171","msg":"trace[1966739086] transaction","detail":"{read_only:false; response_revision:480; number_of_response:1; }","duration":"228.08287ms","start":"2024-08-14T16:55:49.827567Z","end":"2024-08-14T16:55:50.055650Z","steps":["trace[1966739086] 'process raft request'  (duration: 79.397328ms)","trace[1966739086] 'compare'  (duration: 147.821656ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T16:55:56.662732Z","caller":"traceutil/trace.go:171","msg":"trace[868545671] transaction","detail":"{read_only:false; response_revision:514; number_of_response:1; }","duration":"112.58842ms","start":"2024-08-14T16:55:56.550125Z","end":"2024-08-14T16:55:56.662713Z","steps":["trace[868545671] 'process raft request'  (duration: 112.461145ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:55:57.233129Z","caller":"traceutil/trace.go:171","msg":"trace[299387947] linearizableReadLoop","detail":"{readStateIndex:536; appliedIndex:535; }","duration":"208.801852ms","start":"2024-08-14T16:55:57.024313Z","end":"2024-08-14T16:55:57.233115Z","steps":["trace[299387947] 'read index received'  (duration: 208.681574ms)","trace[299387947] 'applied index is now lower than readState.Index'  (duration: 119.462µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T16:55:57.233348Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.015427ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-986999-m02\" ","response":"range_response_count:1 size:2885"}
	{"level":"info","ts":"2024-08-14T16:55:57.233392Z","caller":"traceutil/trace.go:171","msg":"trace[975187225] range","detail":"{range_begin:/registry/minions/multinode-986999-m02; range_end:; response_count:1; response_revision:514; }","duration":"209.074953ms","start":"2024-08-14T16:55:57.024310Z","end":"2024-08-14T16:55:57.233385Z","steps":["trace[975187225] 'agreement among raft nodes before linearized reading'  (duration: 208.952448ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T16:57:12.500328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.056811ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T16:57:12.500401Z","caller":"traceutil/trace.go:171","msg":"trace[1890852243] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:640; }","duration":"131.184918ms","start":"2024-08-14T16:57:12.369201Z","end":"2024-08-14T16:57:12.500386Z","steps":["trace[1890852243] 'range keys from in-memory index tree'  (duration: 131.036668ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T16:57:12.500459Z","caller":"traceutil/trace.go:171","msg":"trace[1409628696] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"210.780232ms","start":"2024-08-14T16:57:12.289669Z","end":"2024-08-14T16:57:12.500449Z","steps":["trace[1409628696] 'process raft request'  (duration: 126.714541ms)","trace[1409628696] 'compare'  (duration: 83.820132ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T17:00:26.067348Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-14T17:00:26.067456Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-986999","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	{"level":"warn","ts":"2024-08-14T17:00:26.067560Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T17:00:26.067674Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T17:00:26.126501Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-14T17:00:26.126550Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.36:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-14T17:00:26.128283Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"74e924d55c832457","current-leader-member-id":"74e924d55c832457"}
	{"level":"info","ts":"2024-08-14T17:00:26.130875Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-14T17:00:26.131083Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.36:2380"}
	{"level":"info","ts":"2024-08-14T17:00:26.131116Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-986999","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.36:2380"],"advertise-client-urls":["https://192.168.39.36:2379"]}
	
	
	==> kernel <==
	 17:06:07 up 11 min,  0 users,  load average: 0.05, 0.17, 0.11
	Linux multinode-986999 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7fa4efe1c9de6af2d6d7702dd349fb63b55826db5265800efbceee44e46f1c15] <==
	I0814 16:59:38.461055       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 16:59:48.457517       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 16:59:48.457598       1 main.go:299] handling current node
	I0814 16:59:48.457638       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 16:59:48.457646       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 16:59:48.457803       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 16:59:48.457823       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 16:59:58.459549       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 16:59:58.459652       1 main.go:299] handling current node
	I0814 16:59:58.459685       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 16:59:58.459703       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 16:59:58.459946       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 16:59:58.459981       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 17:00:08.455872       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:00:08.455959       1 main.go:299] handling current node
	I0814 17:00:08.455977       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:00:08.455983       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:00:08.456148       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 17:00:08.456169       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	I0814 17:00:18.464328       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:00:18.464456       1 main.go:299] handling current node
	I0814 17:00:18.464496       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:00:18.464502       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:00:18.464713       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I0814 17:00:18.464721       1 main.go:322] Node multinode-986999-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [b3b00eff2e1f5a6ebafac3003a2f80b57798117d69a2cb39aab343f964cace12] <==
	I0814 17:05:06.050091       1 main.go:299] handling current node
	I0814 17:05:16.054054       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:05:16.054099       1 main.go:299] handling current node
	I0814 17:05:16.054127       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:05:16.054133       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:05:26.055012       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:05:26.055109       1 main.go:299] handling current node
	I0814 17:05:26.055137       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:05:26.055154       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:05:36.049403       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:05:36.049450       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:05:36.049627       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:05:36.049649       1 main.go:299] handling current node
	I0814 17:05:46.049522       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:05:46.049610       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:05:46.049959       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:05:46.049999       1 main.go:299] handling current node
	I0814 17:05:56.051093       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:05:56.051199       1 main.go:299] handling current node
	I0814 17:05:56.051235       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:05:56.051253       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	I0814 17:06:06.049973       1 main.go:295] Handling node with IPs: map[192.168.39.36:{}]
	I0814 17:06:06.050029       1 main.go:299] handling current node
	I0814 17:06:06.050050       1 main.go:295] Handling node with IPs: map[192.168.39.2:{}]
	I0814 17:06:06.050055       1 main.go:322] Node multinode-986999-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8dca3959236fa87a0d1b48f33075ee8214b4096eb933a3a7a6c54466009360d6] <==
	I0814 16:54:56.321499       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0814 16:54:56.321533       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 16:54:56.878000       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 16:54:56.929486       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 16:54:57.028363       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0814 16:54:57.043628       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.36]
	I0814 16:54:57.044724       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 16:54:57.052015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0814 16:54:57.382497       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 16:54:57.826291       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 16:54:57.850579       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0814 16:54:57.859201       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 16:55:02.885484       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0814 16:55:03.151069       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0814 16:56:45.723455       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39616: use of closed network connection
	E0814 16:56:45.889742       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39628: use of closed network connection
	E0814 16:56:46.071958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39644: use of closed network connection
	E0814 16:56:46.234408       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39670: use of closed network connection
	E0814 16:56:46.403309       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39684: use of closed network connection
	E0814 16:56:46.563579       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39690: use of closed network connection
	E0814 16:56:46.868373       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39710: use of closed network connection
	E0814 16:56:47.040370       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39720: use of closed network connection
	E0814 16:56:47.208082       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39738: use of closed network connection
	E0814 16:56:47.370251       1 conn.go:339] Error on socket receive: read tcp 192.168.39.36:8443->192.168.39.1:39752: use of closed network connection
	I0814 17:00:26.063195       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [a018e5ee7d09971a63b0a8f3373f4295514885455f4d14e303d0475276c613f1] <==
	I0814 17:02:03.852322       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 17:02:03.852425       1 policy_source.go:224] refreshing policies
	I0814 17:02:03.861269       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 17:02:03.863489       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 17:02:03.863554       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 17:02:03.867704       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 17:02:03.868238       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 17:02:03.869172       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 17:02:03.873352       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0814 17:02:03.873641       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 17:02:03.873703       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0814 17:02:03.881973       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 17:02:03.903819       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0814 17:02:03.904019       1 aggregator.go:171] initial CRD sync complete...
	I0814 17:02:03.904050       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 17:02:03.904056       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 17:02:03.904061       1 cache.go:39] Caches are synced for autoregister controller
	I0814 17:02:04.782158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 17:02:05.979606       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 17:02:06.111177       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 17:02:06.122495       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 17:02:06.185112       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 17:02:06.194220       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 17:02:07.326795       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 17:02:07.518741       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2b9a993eb9457bbb830323abdb835c9e4cc6ee50aed085f14af5c2228577a473] <==
	I0814 17:03:20.828717       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-986999-m03" podCIDRs=["10.244.2.0/24"]
	I0814 17:03:20.829371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:20.829486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:20.840120       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:20.858167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:21.213335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:22.247596       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:31.258358       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:40.212347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:40.212618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 17:03:40.224561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:42.217241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:44.821228       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:44.842717       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:45.280107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 17:03:45.280202       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 17:04:27.181617       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zn75c"
	I0814 17:04:27.214363       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-zn75c"
	I0814 17:04:27.214583       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-68bq4"
	I0814 17:04:27.238178       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 17:04:27.257233       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 17:04:27.259393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.362041ms"
	I0814 17:04:27.259781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="33.869µs"
	I0814 17:04:27.275527       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-68bq4"
	I0814 17:04:32.304845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	
	
	==> kube-controller-manager [89325e75b717c86ed94903534b0598617ea1032caaea85f0abed3f882861d08b] <==
	I0814 16:58:00.822683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 16:58:00.822697       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:01.925482       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-986999-m03\" does not exist"
	I0814 16:58:01.930636       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 16:58:01.935484       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-986999-m03" podCIDRs=["10.244.4.0/24"]
	I0814 16:58:01.935947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:01.936136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:01.947652       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:02.305000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:02.340071       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:02.616647       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:12.217023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:20.291349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:20.291345       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 16:58:20.301711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:58:22.304277       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:59:07.322913       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-986999-m02"
	I0814 16:59:07.323288       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:59:07.331003       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 16:59:07.344172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:59:07.349982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	I0814 16:59:07.390940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.982677ms"
	I0814 16:59:07.391521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.908µs"
	I0814 16:59:12.491493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m03"
	I0814 16:59:22.562405       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-986999-m02"
	
	
	==> kube-proxy [065061677ad516a0b1bc60bb13906bca0dfc23e9a5febf090083ea2966988d14] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 16:55:04.733623       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 16:55:04.743192       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	E0814 16:55:04.743285       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 16:55:04.770212       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 16:55:04.770345       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 16:55:04.770388       1 server_linux.go:169] "Using iptables Proxier"
	I0814 16:55:04.772509       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 16:55:04.772838       1 server.go:483] "Version info" version="v1.31.0"
	I0814 16:55:04.772980       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 16:55:04.774470       1 config.go:197] "Starting service config controller"
	I0814 16:55:04.774507       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 16:55:04.774528       1 config.go:104] "Starting endpoint slice config controller"
	I0814 16:55:04.774532       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 16:55:04.776243       1 config.go:326] "Starting node config controller"
	I0814 16:55:04.776270       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 16:55:04.874702       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 16:55:04.874717       1 shared_informer.go:320] Caches are synced for service config
	I0814 16:55:04.876421       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c8e4b77fe8c4c74a9ab92cafdf2ebee61958c4f16d8258caf39d207a7f149da3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:02:05.313064       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:02:05.327157       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.36"]
	E0814 17:02:05.330274       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:02:05.404169       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:02:05.404258       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:02:05.404299       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:02:05.406514       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:02:05.406832       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:02:05.407045       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:02:05.408697       1 config.go:197] "Starting service config controller"
	I0814 17:02:05.409049       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:02:05.409187       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:02:05.409229       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:02:05.409719       1 config.go:326] "Starting node config controller"
	I0814 17:02:05.409755       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:02:05.510161       1 shared_informer.go:320] Caches are synced for node config
	I0814 17:02:05.510208       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:02:05.510235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6bd57a8e25a7ee065c30e3a842e9a8e694dee3572fa7e30bbcc0263ca9b54391] <==
	E0814 16:54:55.396646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.319636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 16:54:56.319685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.327490       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0814 16:54:56.327546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.390167       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 16:54:56.390213       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 16:54:56.439514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 16:54:56.439561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.537242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0814 16:54:56.537304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.599250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 16:54:56.599299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.605730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 16:54:56.605781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.619032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 16:54:56.619081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.625089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 16:54:56.625132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.665398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 16:54:56.665433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 16:54:56.691909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 16:54:56.692049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0814 16:54:59.290045       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0814 17:00:26.077834       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ca92dab795508a7cd6103305623d25d2fffaa671df4ba15094a97c1296844947] <==
	I0814 17:02:01.980216       1 serving.go:386] Generated self-signed cert in-memory
	W0814 17:02:03.816002       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 17:02:03.816216       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 17:02:03.816293       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 17:02:03.816325       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 17:02:03.886874       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 17:02:03.887107       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:02:03.891485       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 17:02:03.891625       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 17:02:03.891669       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 17:02:03.891702       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 17:02:03.992076       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 17:04:50 multinode-986999 kubelet[3026]: E0814 17:04:50.524781    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655090524442158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:00 multinode-986999 kubelet[3026]: E0814 17:05:00.425358    3026 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:05:00 multinode-986999 kubelet[3026]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:05:00 multinode-986999 kubelet[3026]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:05:00 multinode-986999 kubelet[3026]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:05:00 multinode-986999 kubelet[3026]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:05:00 multinode-986999 kubelet[3026]: E0814 17:05:00.526782    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655100526564149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:00 multinode-986999 kubelet[3026]: E0814 17:05:00.526826    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655100526564149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:10 multinode-986999 kubelet[3026]: E0814 17:05:10.529237    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655110528268504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:10 multinode-986999 kubelet[3026]: E0814 17:05:10.529524    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655110528268504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:20 multinode-986999 kubelet[3026]: E0814 17:05:20.533053    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655120531419986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:20 multinode-986999 kubelet[3026]: E0814 17:05:20.533596    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655120531419986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:30 multinode-986999 kubelet[3026]: E0814 17:05:30.535640    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655130534526860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:30 multinode-986999 kubelet[3026]: E0814 17:05:30.535667    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655130534526860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:40 multinode-986999 kubelet[3026]: E0814 17:05:40.537236    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655140536953838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:40 multinode-986999 kubelet[3026]: E0814 17:05:40.537549    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655140536953838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:50 multinode-986999 kubelet[3026]: E0814 17:05:50.539171    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655150538759678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:05:50 multinode-986999 kubelet[3026]: E0814 17:05:50.539233    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655150538759678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:06:00 multinode-986999 kubelet[3026]: E0814 17:06:00.423142    3026 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:06:00 multinode-986999 kubelet[3026]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:06:00 multinode-986999 kubelet[3026]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:06:00 multinode-986999 kubelet[3026]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:06:00 multinode-986999 kubelet[3026]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:06:00 multinode-986999 kubelet[3026]: E0814 17:06:00.544037    3026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655160543003652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:06:00 multinode-986999 kubelet[3026]: E0814 17:06:00.544075    3026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723655160543003652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:06:06.145482   52088 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19446-13977/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-986999 -n multinode-986999
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-986999 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.34s)

                                                
                                    
TestPreload (336.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-116316 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0814 17:12:45.658867   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:13:02.589480   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-116316 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m14.076702773s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-116316 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-116316 image pull gcr.io/k8s-minikube/busybox: (2.794574421s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-116316
E0814 17:14:29.462159   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-116316: exit status 82 (2m0.472136908s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-116316"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-116316 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-08-14 17:15:06.962913647 +0000 UTC m=+3940.648196443
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-116316 -n test-preload-116316
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-116316 -n test-preload-116316: exit status 3 (18.47600604s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:15:25.435678   55220 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host
	E0814 17:15:25.435705   55220 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.76:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-116316" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-116316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-116316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-116316: (1.097627654s)
--- FAIL: TestPreload (336.92s)

                                                
                                    
TestKubernetesUpgrade (402.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0814 17:18:02.588929   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m16.925749704s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-422555] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-422555" primary control-plane node in "kubernetes-upgrade-422555" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 17:18:01.931158   59595 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:18:01.931475   59595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:18:01.931485   59595 out.go:304] Setting ErrFile to fd 2...
	I0814 17:18:01.931490   59595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:18:01.931683   59595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:18:01.932212   59595 out.go:298] Setting JSON to false
	I0814 17:18:01.933067   59595 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7226,"bootTime":1723648656,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:18:01.933124   59595 start.go:139] virtualization: kvm guest
	I0814 17:18:01.935259   59595 out.go:177] * [kubernetes-upgrade-422555] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:18:01.936433   59595 notify.go:220] Checking for updates...
	I0814 17:18:01.936453   59595 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:18:01.937745   59595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:18:01.938880   59595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:18:01.939997   59595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:18:01.941413   59595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:18:01.942959   59595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:18:01.944548   59595 config.go:182] Loaded profile config "NoKubernetes-009758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:18:01.944648   59595 config.go:182] Loaded profile config "offline-crio-972905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:18:01.944724   59595 config.go:182] Loaded profile config "running-upgrade-706037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0814 17:18:01.944790   59595 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:18:01.979220   59595 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 17:18:01.980394   59595 start.go:297] selected driver: kvm2
	I0814 17:18:01.980410   59595 start.go:901] validating driver "kvm2" against <nil>
	I0814 17:18:01.980421   59595 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:18:01.981097   59595 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:18:01.981158   59595 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:18:01.995879   59595 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:18:01.995935   59595 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 17:18:01.996181   59595 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 17:18:01.996217   59595 cni.go:84] Creating CNI manager for ""
	I0814 17:18:01.996228   59595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:18:01.996246   59595 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 17:18:01.996318   59595 start.go:340] cluster config:
	{Name:kubernetes-upgrade-422555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:18:01.996449   59595 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:18:01.998181   59595 out.go:177] * Starting "kubernetes-upgrade-422555" primary control-plane node in "kubernetes-upgrade-422555" cluster
	I0814 17:18:01.999626   59595 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:18:01.999665   59595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:18:01.999672   59595 cache.go:56] Caching tarball of preloaded images
	I0814 17:18:01.999755   59595 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:18:01.999766   59595 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:18:01.999903   59595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/config.json ...
	I0814 17:18:01.999926   59595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/config.json: {Name:mkb251fc4b116c2da62106f7e405bd9e61236ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:18:02.000087   59595 start.go:360] acquireMachinesLock for kubernetes-upgrade-422555: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:18:49.536500   59595 start.go:364] duration metric: took 47.536366827s to acquireMachinesLock for "kubernetes-upgrade-422555"
	I0814 17:18:49.536577   59595 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-422555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:18:49.536672   59595 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 17:18:49.538648   59595 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 17:18:49.538867   59595 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:18:49.538917   59595 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:18:49.559430   59595 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35043
	I0814 17:18:49.560025   59595 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:18:49.560650   59595 main.go:141] libmachine: Using API Version  1
	I0814 17:18:49.560685   59595 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:18:49.561099   59595 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:18:49.561292   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetMachineName
	I0814 17:18:49.561502   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:18:49.561640   59595 start.go:159] libmachine.API.Create for "kubernetes-upgrade-422555" (driver="kvm2")
	I0814 17:18:49.561677   59595 client.go:168] LocalClient.Create starting
	I0814 17:18:49.561713   59595 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 17:18:49.561755   59595 main.go:141] libmachine: Decoding PEM data...
	I0814 17:18:49.561780   59595 main.go:141] libmachine: Parsing certificate...
	I0814 17:18:49.561842   59595 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 17:18:49.561867   59595 main.go:141] libmachine: Decoding PEM data...
	I0814 17:18:49.561883   59595 main.go:141] libmachine: Parsing certificate...
	I0814 17:18:49.561907   59595 main.go:141] libmachine: Running pre-create checks...
	I0814 17:18:49.561919   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .PreCreateCheck
	I0814 17:18:49.562363   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetConfigRaw
	I0814 17:18:49.562775   59595 main.go:141] libmachine: Creating machine...
	I0814 17:18:49.562791   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .Create
	I0814 17:18:49.562960   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Creating KVM machine...
	I0814 17:18:49.564540   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found existing default KVM network
	I0814 17:18:49.566120   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:49.565941   60269 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:90:88:8e} reservation:<nil>}
	I0814 17:18:49.567117   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:49.567041   60269 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:33:44:f8} reservation:<nil>}
	I0814 17:18:49.568391   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:49.568304   60269 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:3c:04} reservation:<nil>}
	I0814 17:18:49.569590   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:49.569506   60269 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000309aa0}
	I0814 17:18:49.569642   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | created network xml: 
	I0814 17:18:49.569663   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | <network>
	I0814 17:18:49.569674   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |   <name>mk-kubernetes-upgrade-422555</name>
	I0814 17:18:49.569687   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |   <dns enable='no'/>
	I0814 17:18:49.569695   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |   
	I0814 17:18:49.569704   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0814 17:18:49.569714   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |     <dhcp>
	I0814 17:18:49.569723   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0814 17:18:49.569756   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |     </dhcp>
	I0814 17:18:49.569780   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |   </ip>
	I0814 17:18:49.569800   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG |   
	I0814 17:18:49.569808   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | </network>
	I0814 17:18:49.569822   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | 
	I0814 17:18:49.575987   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | trying to create private KVM network mk-kubernetes-upgrade-422555 192.168.72.0/24...
	I0814 17:18:49.655822   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | private KVM network mk-kubernetes-upgrade-422555 192.168.72.0/24 created
	I0814 17:18:49.655864   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555 ...
	I0814 17:18:49.655884   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:49.655787   60269 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:18:49.655908   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 17:18:49.655928   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 17:18:49.902920   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:49.902780   60269 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa...
	I0814 17:18:50.034091   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:50.033942   60269 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/kubernetes-upgrade-422555.rawdisk...
	I0814 17:18:50.034138   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Writing magic tar header
	I0814 17:18:50.034153   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Writing SSH key tar header
	I0814 17:18:50.034162   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:50.034078   60269 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555 ...
	I0814 17:18:50.034208   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555
	I0814 17:18:50.034236   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 17:18:50.034256   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:18:50.034270   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555 (perms=drwx------)
	I0814 17:18:50.034283   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 17:18:50.034296   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 17:18:50.034314   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 17:18:50.034328   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 17:18:50.034342   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 17:18:50.034362   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 17:18:50.034378   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 17:18:50.034392   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Creating domain...
	I0814 17:18:50.034403   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Checking permissions on dir: /home/jenkins
	I0814 17:18:50.034414   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Checking permissions on dir: /home
	I0814 17:18:50.034432   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Skipping /home - not owner
	I0814 17:18:50.035737   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) define libvirt domain using xml: 
	I0814 17:18:50.035762   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) <domain type='kvm'>
	I0814 17:18:50.035776   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   <name>kubernetes-upgrade-422555</name>
	I0814 17:18:50.035791   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   <memory unit='MiB'>2200</memory>
	I0814 17:18:50.035806   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   <vcpu>2</vcpu>
	I0814 17:18:50.035818   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   <features>
	I0814 17:18:50.035844   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <acpi/>
	I0814 17:18:50.035873   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <apic/>
	I0814 17:18:50.035901   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <pae/>
	I0814 17:18:50.035918   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     
	I0814 17:18:50.035955   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   </features>
	I0814 17:18:50.035983   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   <cpu mode='host-passthrough'>
	I0814 17:18:50.035994   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   
	I0814 17:18:50.036001   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   </cpu>
	I0814 17:18:50.036022   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   <os>
	I0814 17:18:50.036035   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <type>hvm</type>
	I0814 17:18:50.036044   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <boot dev='cdrom'/>
	I0814 17:18:50.036051   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <boot dev='hd'/>
	I0814 17:18:50.036061   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <bootmenu enable='no'/>
	I0814 17:18:50.036068   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   </os>
	I0814 17:18:50.036077   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   <devices>
	I0814 17:18:50.036090   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <disk type='file' device='cdrom'>
	I0814 17:18:50.036100   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/boot2docker.iso'/>
	I0814 17:18:50.036109   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <target dev='hdc' bus='scsi'/>
	I0814 17:18:50.036158   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <readonly/>
	I0814 17:18:50.036184   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     </disk>
	I0814 17:18:50.036199   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <disk type='file' device='disk'>
	I0814 17:18:50.036214   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 17:18:50.036233   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/kubernetes-upgrade-422555.rawdisk'/>
	I0814 17:18:50.036245   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <target dev='hda' bus='virtio'/>
	I0814 17:18:50.036262   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     </disk>
	I0814 17:18:50.036275   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <interface type='network'>
	I0814 17:18:50.036289   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <source network='mk-kubernetes-upgrade-422555'/>
	I0814 17:18:50.036301   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <model type='virtio'/>
	I0814 17:18:50.036314   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     </interface>
	I0814 17:18:50.036326   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <interface type='network'>
	I0814 17:18:50.036340   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <source network='default'/>
	I0814 17:18:50.036352   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <model type='virtio'/>
	I0814 17:18:50.036364   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     </interface>
	I0814 17:18:50.036375   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <serial type='pty'>
	I0814 17:18:50.036388   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <target port='0'/>
	I0814 17:18:50.036403   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     </serial>
	I0814 17:18:50.036416   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <console type='pty'>
	I0814 17:18:50.036425   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <target type='serial' port='0'/>
	I0814 17:18:50.036434   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     </console>
	I0814 17:18:50.036446   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     <rng model='virtio'>
	I0814 17:18:50.036460   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)       <backend model='random'>/dev/random</backend>
	I0814 17:18:50.036470   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     </rng>
	I0814 17:18:50.036482   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     
	I0814 17:18:50.036495   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)     
	I0814 17:18:50.036506   59595 main.go:141] libmachine: (kubernetes-upgrade-422555)   </devices>
	I0814 17:18:50.036516   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) </domain>
	I0814 17:18:50.036527   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) 
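	(For reference only, not part of the recorded run: the XML above is the libvirt domain definition the kvm2 driver generates for this profile. A hedged sketch of how a similar domain and its networks could be inspected by hand with standard virsh commands; the domain and network names are taken from this log, and qemu:///system matches the KVMQemuURI in the profile config.)
	  virsh --connect qemu:///system dumpxml kubernetes-upgrade-422555      # print the defined domain XML
	  virsh --connect qemu:///system net-info mk-kubernetes-upgrade-422555  # confirm the private network is active
	  virsh --connect qemu:///system domifaddr kubernetes-upgrade-422555    # show the DHCP lease / IP the log waits for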
	I0814 17:18:50.041741   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:12:15:13 in network default
	I0814 17:18:50.042605   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Ensuring networks are active...
	I0814 17:18:50.042626   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:50.043503   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Ensuring network default is active
	I0814 17:18:50.043906   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Ensuring network mk-kubernetes-upgrade-422555 is active
	I0814 17:18:50.044604   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Getting domain xml...
	I0814 17:18:50.045539   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Creating domain...
	I0814 17:18:51.292099   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Waiting to get IP...
	I0814 17:18:51.293070   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:51.293608   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:51.293648   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:51.293580   60269 retry.go:31] will retry after 247.846332ms: waiting for machine to come up
	I0814 17:18:51.542881   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:51.543454   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:51.543486   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:51.543401   60269 retry.go:31] will retry after 254.362308ms: waiting for machine to come up
	I0814 17:18:51.799737   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:51.800234   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:51.800264   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:51.800175   60269 retry.go:31] will retry after 436.43784ms: waiting for machine to come up
	I0814 17:18:52.238422   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:52.238875   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:52.238932   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:52.238852   60269 retry.go:31] will retry after 563.660266ms: waiting for machine to come up
	I0814 17:18:52.804804   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:52.805325   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:52.805356   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:52.805280   60269 retry.go:31] will retry after 740.787323ms: waiting for machine to come up
	I0814 17:18:53.547187   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:53.547672   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:53.547714   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:53.547631   60269 retry.go:31] will retry after 603.549181ms: waiting for machine to come up
	I0814 17:18:54.152838   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:54.153364   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:54.153392   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:54.153273   60269 retry.go:31] will retry after 835.426658ms: waiting for machine to come up
	I0814 17:18:54.991024   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:54.991688   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:54.991720   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:54.991638   60269 retry.go:31] will retry after 1.456806524s: waiting for machine to come up
	I0814 17:18:56.450294   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:56.450864   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:56.450909   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:56.450816   60269 retry.go:31] will retry after 1.348350698s: waiting for machine to come up
	I0814 17:18:57.800387   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:57.800877   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:57.800902   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:57.800821   60269 retry.go:31] will retry after 1.877149684s: waiting for machine to come up
	I0814 17:18:59.679660   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:18:59.680379   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:18:59.680406   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:18:59.680306   60269 retry.go:31] will retry after 2.552171572s: waiting for machine to come up
	I0814 17:19:02.235962   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:02.236569   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:19:02.236606   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:19:02.236519   60269 retry.go:31] will retry after 2.670867197s: waiting for machine to come up
	I0814 17:19:04.909038   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:04.909484   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:19:04.909501   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:19:04.909448   60269 retry.go:31] will retry after 4.005803716s: waiting for machine to come up
	I0814 17:19:08.916402   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:08.916919   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find current IP address of domain kubernetes-upgrade-422555 in network mk-kubernetes-upgrade-422555
	I0814 17:19:08.916944   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | I0814 17:19:08.916847   60269 retry.go:31] will retry after 3.858007532s: waiting for machine to come up
	I0814 17:19:12.776102   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:12.776724   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Found IP for machine: 192.168.72.9
	I0814 17:19:12.776756   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has current primary IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:12.776765   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Reserving static IP address...
	I0814 17:19:12.777255   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-422555", mac: "52:54:00:7b:c9:3b", ip: "192.168.72.9"} in network mk-kubernetes-upgrade-422555
	I0814 17:19:12.853896   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Getting to WaitForSSH function...
	I0814 17:19:12.853952   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Reserved static IP address: 192.168.72.9
	I0814 17:19:12.853968   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Waiting for SSH to be available...
	I0814 17:19:12.856961   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:12.857401   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:12.857432   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:12.857536   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Using SSH client type: external
	I0814 17:19:12.857565   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa (-rw-------)
	I0814 17:19:12.857594   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.9 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:19:12.857607   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | About to run SSH command:
	I0814 17:19:12.857627   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | exit 0
	I0814 17:19:12.979401   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | SSH cmd err, output: <nil>: 
	I0814 17:19:12.979632   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) KVM machine creation complete!
	I0814 17:19:12.980056   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetConfigRaw
	I0814 17:19:12.980703   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:19:12.980935   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:19:12.981138   59595 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 17:19:12.981154   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetState
	I0814 17:19:12.982528   59595 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 17:19:12.982549   59595 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 17:19:12.982569   59595 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 17:19:12.982578   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:12.985472   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:12.985908   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:12.985931   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:12.986111   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:12.986286   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:12.986445   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:12.986599   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:12.986785   59595 main.go:141] libmachine: Using SSH client type: native
	I0814 17:19:12.986981   59595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:19:12.986993   59595 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 17:19:13.086593   59595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:19:13.086615   59595 main.go:141] libmachine: Detecting the provisioner...
	I0814 17:19:13.086623   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:13.089375   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.089755   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.089785   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.089964   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:13.090148   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.090272   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.090354   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:13.090509   59595 main.go:141] libmachine: Using SSH client type: native
	I0814 17:19:13.090676   59595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:19:13.090687   59595 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 17:19:13.193107   59595 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 17:19:13.193204   59595 main.go:141] libmachine: found compatible host: buildroot
	I0814 17:19:13.193220   59595 main.go:141] libmachine: Provisioning with buildroot...
	I0814 17:19:13.193233   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetMachineName
	I0814 17:19:13.193517   59595 buildroot.go:166] provisioning hostname "kubernetes-upgrade-422555"
	I0814 17:19:13.193549   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetMachineName
	I0814 17:19:13.193815   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:13.196536   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.196866   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.196905   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.197086   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:13.197288   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.197459   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.197616   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:13.197788   59595 main.go:141] libmachine: Using SSH client type: native
	I0814 17:19:13.198056   59595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:19:13.198084   59595 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-422555 && echo "kubernetes-upgrade-422555" | sudo tee /etc/hostname
	I0814 17:19:13.314564   59595 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-422555
	
	I0814 17:19:13.314593   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:13.317506   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.317910   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.317941   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.318155   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:13.318352   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.318558   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.318733   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:13.318941   59595 main.go:141] libmachine: Using SSH client type: native
	I0814 17:19:13.319172   59595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:19:13.319194   59595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-422555' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-422555/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-422555' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:19:13.427429   59595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:19:13.427458   59595 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:19:13.427499   59595 buildroot.go:174] setting up certificates
	I0814 17:19:13.427509   59595 provision.go:84] configureAuth start
	I0814 17:19:13.427518   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetMachineName
	I0814 17:19:13.427799   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetIP
	I0814 17:19:13.430463   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.430837   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.430858   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.431020   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:13.433409   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.433786   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.433814   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.433967   59595 provision.go:143] copyHostCerts
	I0814 17:19:13.434029   59595 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:19:13.434043   59595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:19:13.434192   59595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:19:13.434346   59595 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:19:13.434359   59595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:19:13.434389   59595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:19:13.434480   59595 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:19:13.434491   59595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:19:13.434517   59595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:19:13.434610   59595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-422555 san=[127.0.0.1 192.168.72.9 kubernetes-upgrade-422555 localhost minikube]
	I0814 17:19:13.539151   59595 provision.go:177] copyRemoteCerts
	I0814 17:19:13.539226   59595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:19:13.539259   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:13.542430   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.542771   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.542804   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.543018   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:13.543274   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.543461   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:13.543613   59595 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa Username:docker}
	I0814 17:19:13.622162   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:19:13.645620   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0814 17:19:13.672856   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:19:13.703207   59595 provision.go:87] duration metric: took 275.684445ms to configureAuth
	I0814 17:19:13.703237   59595 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:19:13.703458   59595 config.go:182] Loaded profile config "kubernetes-upgrade-422555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:19:13.703555   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:13.706785   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.707123   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.707155   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.707366   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:13.707614   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.707811   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.707948   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:13.708109   59595 main.go:141] libmachine: Using SSH client type: native
	I0814 17:19:13.708335   59595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:19:13.708365   59595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:19:13.988193   59595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:19:13.988225   59595 main.go:141] libmachine: Checking connection to Docker...
	I0814 17:19:13.988238   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetURL
	I0814 17:19:13.989528   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | Using libvirt version 6000000
	I0814 17:19:13.992261   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.992643   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.992671   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.992860   59595 main.go:141] libmachine: Docker is up and running!
	I0814 17:19:13.992876   59595 main.go:141] libmachine: Reticulating splines...
	I0814 17:19:13.992882   59595 client.go:171] duration metric: took 24.431195481s to LocalClient.Create
	I0814 17:19:13.992906   59595 start.go:167] duration metric: took 24.431266253s to libmachine.API.Create "kubernetes-upgrade-422555"
	I0814 17:19:13.992915   59595 start.go:293] postStartSetup for "kubernetes-upgrade-422555" (driver="kvm2")
	I0814 17:19:13.992938   59595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:19:13.992959   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:19:13.993215   59595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:19:13.993245   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:13.995856   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.996255   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:13.996280   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:13.996465   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:13.996658   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:13.996868   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:13.997016   59595 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa Username:docker}
	I0814 17:19:14.078464   59595 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:19:14.082725   59595 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:19:14.082759   59595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:19:14.082832   59595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:19:14.082947   59595 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:19:14.083101   59595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:19:14.093136   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:19:14.121570   59595 start.go:296] duration metric: took 128.63905ms for postStartSetup
	I0814 17:19:14.121622   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetConfigRaw
	I0814 17:19:14.122226   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetIP
	I0814 17:19:14.125507   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.125947   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:14.125980   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.126447   59595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/config.json ...
	I0814 17:19:14.126710   59595 start.go:128] duration metric: took 24.590026467s to createHost
	I0814 17:19:14.126744   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:14.129434   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.129936   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:14.129980   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.130108   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:14.130344   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:14.130491   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:14.130670   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:14.130836   59595 main.go:141] libmachine: Using SSH client type: native
	I0814 17:19:14.131085   59595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:19:14.131102   59595 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:19:14.240232   59595 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723655954.214789283
	
	I0814 17:19:14.240261   59595 fix.go:216] guest clock: 1723655954.214789283
	I0814 17:19:14.240270   59595 fix.go:229] Guest: 2024-08-14 17:19:14.214789283 +0000 UTC Remote: 2024-08-14 17:19:14.126727871 +0000 UTC m=+72.227962721 (delta=88.061412ms)
	I0814 17:19:14.240297   59595 fix.go:200] guest clock delta is within tolerance: 88.061412ms
	I0814 17:19:14.240304   59595 start.go:83] releasing machines lock for "kubernetes-upgrade-422555", held for 24.703766948s
	I0814 17:19:14.240338   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:19:14.240641   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetIP
	I0814 17:19:14.244102   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.244552   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:14.244583   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.244809   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:19:14.245374   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:19:14.245573   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:19:14.245668   59595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:19:14.245722   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:14.246033   59595 ssh_runner.go:195] Run: cat /version.json
	I0814 17:19:14.246056   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:19:14.248780   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.249070   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.249165   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:14.249208   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.249417   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:14.249539   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:14.249597   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:14.249604   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:14.249749   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:19:14.249897   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:14.249990   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:19:14.250137   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:19:14.250177   59595 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa Username:docker}
	I0814 17:19:14.250288   59595 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa Username:docker}
	I0814 17:19:14.329078   59595 ssh_runner.go:195] Run: systemctl --version
	I0814 17:19:14.371422   59595 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:19:14.532370   59595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:19:14.538329   59595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:19:14.538400   59595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:19:14.555154   59595 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:19:14.555188   59595 start.go:495] detecting cgroup driver to use...
	I0814 17:19:14.555261   59595 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:19:14.571682   59595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:19:14.585945   59595 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:19:14.586019   59595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:19:14.602059   59595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:19:14.616637   59595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:19:14.744089   59595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:19:14.916865   59595 docker.go:233] disabling docker service ...
	I0814 17:19:14.916948   59595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:19:14.935502   59595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:19:14.949394   59595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:19:15.095640   59595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:19:15.231406   59595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:19:15.248046   59595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:19:15.265522   59595 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:19:15.265598   59595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:19:15.276527   59595 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:19:15.276601   59595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:19:15.287234   59595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:19:15.298859   59595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:19:15.310290   59595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
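	(For reference only, not commands from the recorded run: the sed edits above pin the pause image and switch cri-o to the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of how the result could be checked on such a node, assuming the standard "minikube ssh" entry point and the profile name taken from this log.)
	  minikube ssh -p kubernetes-upgrade-422555 -- \
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  minikube ssh -p kubernetes-upgrade-422555 -- sudo crictl info   # runtime-level view after the crio restart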
	I0814 17:19:15.321957   59595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:19:15.332316   59595 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:19:15.332384   59595 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:19:15.346967   59595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:19:15.356408   59595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:19:15.470976   59595 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:19:15.603261   59595 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:19:15.603356   59595 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:19:15.607760   59595 start.go:563] Will wait 60s for crictl version
	I0814 17:19:15.607817   59595 ssh_runner.go:195] Run: which crictl
	I0814 17:19:15.611374   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:19:15.652668   59595 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:19:15.652781   59595 ssh_runner.go:195] Run: crio --version
	I0814 17:19:15.680142   59595 ssh_runner.go:195] Run: crio --version
	I0814 17:19:15.709090   59595 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:19:15.710350   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetIP
	I0814 17:19:15.713083   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:15.713379   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:19:04 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:19:15.713418   59595 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:19:15.713629   59595 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:19:15.718500   59595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:19:15.734418   59595 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-422555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:19:15.734550   59595 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:19:15.734606   59595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:19:15.776917   59595 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:19:15.776980   59595 ssh_runner.go:195] Run: which lz4
	I0814 17:19:15.781944   59595 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:19:15.787074   59595 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:19:15.787150   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:19:17.228906   59595 crio.go:462] duration metric: took 1.446999078s to copy over tarball
	I0814 17:19:17.228991   59595 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:19:19.665905   59595 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.436886338s)
	I0814 17:19:19.665934   59595 crio.go:469] duration metric: took 2.436995114s to extract the tarball
	I0814 17:19:19.665946   59595 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:19:19.707694   59595 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:19:19.752700   59595 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:19:19.752724   59595 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:19:19.752799   59595 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:19:19.752820   59595 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:19:19.752799   59595 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:19:19.752842   59595 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:19:19.752863   59595 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:19:19.752866   59595 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:19:19.752955   59595 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:19:19.752909   59595 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:19:19.754229   59595 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:19:19.754292   59595 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:19:19.754299   59595 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:19:19.754299   59595 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:19:19.754334   59595 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:19:19.754355   59595 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:19:19.754362   59595 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:19:19.754364   59595 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:19:20.027596   59595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:19:20.068742   59595 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:19:20.068775   59595 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:19:20.068815   59595 ssh_runner.go:195] Run: which crictl
	I0814 17:19:20.072441   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:19:20.109860   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:19:20.133038   59595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:19:20.133230   59595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:19:20.137454   59595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:19:20.139436   59595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:19:20.159579   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:19:20.159802   59595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:19:20.161821   59595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:19:20.301842   59595 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:19:20.301858   59595 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:19:20.301890   59595 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:19:20.301894   59595 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:19:20.301944   59595 ssh_runner.go:195] Run: which crictl
	I0814 17:19:20.301944   59595 ssh_runner.go:195] Run: which crictl
	I0814 17:19:20.341103   59595 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:19:20.341144   59595 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:19:20.341159   59595 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:19:20.341197   59595 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:19:20.341204   59595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:19:20.341216   59595 ssh_runner.go:195] Run: which crictl
	I0814 17:19:20.341239   59595 ssh_runner.go:195] Run: which crictl
	I0814 17:19:20.357471   59595 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:19:20.357513   59595 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:19:20.357563   59595 ssh_runner.go:195] Run: which crictl
	I0814 17:19:20.407693   59595 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:19:20.407750   59595 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:19:20.407770   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:19:20.407791   59595 ssh_runner.go:195] Run: which crictl
	I0814 17:19:20.407806   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:19:20.407885   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:19:20.407910   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:19:20.407916   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:19:20.510976   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:19:20.510993   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:19:20.511096   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:19:20.511152   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:19:20.511179   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:19:20.511198   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:19:20.609300   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:19:20.639511   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:19:20.639531   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:19:20.639557   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:19:20.639645   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:19:20.639659   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:19:20.652115   59595 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:19:20.745352   59595 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:19:20.792211   59595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:19:20.805449   59595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:19:20.805448   59595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:19:20.805520   59595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:19:20.805526   59595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:19:20.912021   59595 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:19:20.912102   59595 cache_images.go:92] duration metric: took 1.159364772s to LoadCachedImages
	W0814 17:19:20.912179   59595 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
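
Each "needs transfer" line above means the image is not yet in CRI-O on the node; the follow-up X warning fires because the local cache directory does not actually contain the tarred images to transfer. A rough Go equivalent of that presence check, parsing the same crictl JSON the log queries (the image name is one of the eight listed above; sudo and crictl on PATH are assumptions):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// criImages models just the fields of `crictl images --output json` we need.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	const want = "registry.k8s.io/pause:3.2"
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("already present:", want)
				return
			}
		}
	}
	fmt.Println("needs transfer:", want)
}
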
	I0814 17:19:20.912194   59595 kubeadm.go:934] updating node { 192.168.72.9 8443 v1.20.0 crio true true} ...
	I0814 17:19:20.912312   59595 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-422555 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
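
The [Unit]/[Service]/[Install] block above is the kubelet drop-in that gets scp'd to the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes). A sketch of that write-and-reload step with the unit text copied from the log; run as root, and note that the empty ExecStart= line is what clears the packaged unit's command so the override can set its own:

package main

import (
	"log"
	"os"
	"os/exec"
)

// Unit text as shown in the kubeadm.go:946 block above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-422555 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.9

[Install]
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		log.Fatal(err)
	}
	path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	if err := os.WriteFile(path, []byte(dropIn), 0o644); err != nil {
		log.Fatal(err)
	}
	// Pick up the drop-in, mirroring the `systemctl daemon-reload` run in the log.
	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
		log.Fatal(err)
	}
	log.Println("kubelet drop-in installed:", path)
}
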
	I0814 17:19:20.912392   59595 ssh_runner.go:195] Run: crio config
	I0814 17:19:20.985500   59595 cni.go:84] Creating CNI manager for ""
	I0814 17:19:20.985528   59595 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:19:20.985543   59595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:19:20.985567   59595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.9 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-422555 NodeName:kubernetes-upgrade-422555 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:19:20.985760   59595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-422555"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:19:20.985831   59595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:19:20.996729   59595 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:19:20.996822   59595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:19:21.007302   59595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I0814 17:19:21.023979   59595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:19:21.040046   59595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0814 17:19:21.063700   59595 ssh_runner.go:195] Run: grep 192.168.72.9	control-plane.minikube.internal$ /etc/hosts
	I0814 17:19:21.069971   59595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
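
The bash one-liner above is how the control-plane hostname is pinned: filter any stale control-plane.minikube.internal entry out of /etc/hosts, append the current IP, and copy the temp file back over /etc/hosts. The same edit as a small Go sketch (entry and path from the log; writing /etc/hosts needs root, and unlike the shell version this rewrites the file in place rather than via a temp copy):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.9\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale line for the control-plane alias, keep everything else.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing blank elements so empty lines don't stack, then append the entry.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}
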
	I0814 17:19:21.083582   59595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:19:21.222157   59595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:19:21.242235   59595 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555 for IP: 192.168.72.9
	I0814 17:19:21.242275   59595 certs.go:194] generating shared ca certs ...
	I0814 17:19:21.242295   59595 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:19:21.242487   59595 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:19:21.242553   59595 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:19:21.242570   59595 certs.go:256] generating profile certs ...
	I0814 17:19:21.242655   59595 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/client.key
	I0814 17:19:21.242684   59595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/client.crt with IP's: []
	I0814 17:19:21.654172   59595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/client.crt ...
	I0814 17:19:21.654210   59595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/client.crt: {Name:mkcd58daed28cba4c747db58137f289a4d020f06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:19:21.654395   59595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/client.key ...
	I0814 17:19:21.654412   59595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/client.key: {Name:mk2a8d207b0643d7829022a32cf692df54599644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:19:21.654501   59595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.key.4b2808ac
	I0814 17:19:21.654520   59595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.crt.4b2808ac with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.9]
	I0814 17:19:21.919946   59595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.crt.4b2808ac ...
	I0814 17:19:21.919983   59595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.crt.4b2808ac: {Name:mkfd6006249e2cd561bf50f7052c5aadfa17bd7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:19:21.926819   59595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.key.4b2808ac ...
	I0814 17:19:21.926856   59595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.key.4b2808ac: {Name:mk0d6aa13612fa30649d0f46786dc75249becb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:19:21.926990   59595 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.crt.4b2808ac -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.crt
	I0814 17:19:21.927125   59595 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.key.4b2808ac -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.key
	I0814 17:19:21.927213   59595 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.key
	I0814 17:19:21.927236   59595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.crt with IP's: []
	I0814 17:19:21.985015   59595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.crt ...
	I0814 17:19:21.985046   59595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.crt: {Name:mkb45cfc541673d56eecb0e4fa1a1a7903a87585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:19:21.985192   59595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.key ...
	I0814 17:19:21.985204   59595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.key: {Name:mkd38f1cfd2c010ad20834bcddf43ec65238c950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:19:21.985372   59595 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:19:21.985410   59595 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:19:21.985420   59595 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:19:21.985443   59595 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:19:21.985466   59595 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:19:21.985490   59595 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:19:21.985527   59595 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:19:21.986054   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:19:22.012945   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:19:22.037070   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:19:22.059879   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:19:22.084689   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0814 17:19:22.110067   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:19:22.133478   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:19:22.238786   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 17:19:22.262042   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:19:22.286196   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:19:22.312804   59595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:19:22.335454   59595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:19:22.350876   59595 ssh_runner.go:195] Run: openssl version
	I0814 17:19:22.356288   59595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:19:22.365962   59595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:19:22.370292   59595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:19:22.370346   59595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:19:22.375712   59595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:19:22.385681   59595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:19:22.396427   59595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:19:22.401365   59595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:19:22.401422   59595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:19:22.407460   59595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:19:22.417738   59595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:19:22.427899   59595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:19:22.432100   59595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:19:22.432159   59595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:19:22.437645   59595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
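
The three openssl/ln pairs above follow the standard OpenSSL trust-store layout: each PEM under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs so clients can find it by hash. A small Go sketch of one of those steps for the minikubeCA cert; the b5213941 hash is the value visible in the log, and openssl plus root access are assumed:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout -in <pem>` prints the subject-name hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	fmt.Println("CA trusted via", link)
}
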
	I0814 17:19:22.447894   59595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:19:22.452580   59595 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 17:19:22.452645   59595 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-422555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:19:22.452724   59595 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:19:22.452779   59595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:19:22.496914   59595 cri.go:89] found id: ""
	I0814 17:19:22.496993   59595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:19:22.506848   59595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:19:22.516625   59595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:19:22.526003   59595 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:19:22.526026   59595 kubeadm.go:157] found existing configuration files:
	
	I0814 17:19:22.526110   59595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:19:22.535158   59595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:19:22.535224   59595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:19:22.544483   59595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:19:22.553511   59595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:19:22.553571   59595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:19:22.563511   59595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:19:22.572711   59595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:19:22.572768   59595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:19:22.581966   59595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:19:22.591283   59595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:19:22.591374   59595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:19:22.601247   59595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:19:22.722049   59595 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:19:22.722159   59595 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:19:22.863525   59595 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:19:22.863702   59595 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:19:22.863858   59595 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:19:23.081696   59595 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:19:23.296305   59595 out.go:204]   - Generating certificates and keys ...
	I0814 17:19:23.296442   59595 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:19:23.296511   59595 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:19:23.371012   59595 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 17:19:23.446825   59595 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 17:19:23.541187   59595 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 17:19:23.616707   59595 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 17:19:23.737872   59595 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 17:19:23.738078   59595 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-422555 localhost] and IPs [192.168.72.9 127.0.0.1 ::1]
	I0814 17:19:23.958191   59595 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 17:19:23.958616   59595 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-422555 localhost] and IPs [192.168.72.9 127.0.0.1 ::1]
	I0814 17:19:24.096641   59595 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 17:19:24.157310   59595 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 17:19:24.318531   59595 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 17:19:24.318884   59595 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:19:24.560626   59595 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:19:24.918015   59595 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:19:25.112204   59595 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:19:25.205583   59595 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:19:25.231816   59595 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:19:25.233017   59595 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:19:25.233090   59595 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:19:25.385546   59595 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:19:25.387308   59595 out.go:204]   - Booting up control plane ...
	I0814 17:19:25.387452   59595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:19:25.402333   59595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:19:25.402463   59595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:19:25.402595   59595 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:19:25.410149   59595 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:20:05.403982   59595 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:20:05.404446   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:20:05.404653   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:20:10.405129   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:20:10.405444   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:20:20.404130   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:20:20.404404   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:20:40.403640   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:20:40.403856   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:21:20.405246   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:21:20.405454   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:21:20.405468   59595 kubeadm.go:310] 
	I0814 17:21:20.405518   59595 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:21:20.405563   59595 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:21:20.405571   59595 kubeadm.go:310] 
	I0814 17:21:20.405635   59595 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:21:20.405675   59595 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:21:20.405824   59595 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:21:20.405832   59595 kubeadm.go:310] 
	I0814 17:21:20.405965   59595 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:21:20.406045   59595 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:21:20.406107   59595 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:21:20.406129   59595 kubeadm.go:310] 
	I0814 17:21:20.406212   59595 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:21:20.406277   59595 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:21:20.406284   59595 kubeadm.go:310] 
	I0814 17:21:20.406383   59595 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:21:20.406457   59595 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:21:20.406546   59595 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:21:20.406664   59595 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:21:20.406680   59595 kubeadm.go:310] 
	I0814 17:21:20.406806   59595 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:21:20.406921   59595 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:21:20.407010   59595 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
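
Every [kubelet-check] failure above is kubeadm polling the kubelet's local healthz endpoint and getting connection refused, which means the kubelet process never started listening on this node; that is what eventually times out wait-control-plane. The probe is just an HTTP GET, and a minimal Go version for poking the node by hand looks like this (port 10248 and the /healthz path are the ones shown in the log):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// This is the repeated "connection refused" above: nothing is listening on 10248
		// because the kubelet never came up.
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
}
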
	W0814 17:21:20.407100   59595 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-422555 localhost] and IPs [192.168.72.9 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-422555 localhost] and IPs [192.168.72.9 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-422555 localhost] and IPs [192.168.72.9 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-422555 localhost] and IPs [192.168.72.9 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 17:21:20.407144   59595 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:21:20.855369   59595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:21:20.870291   59595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:21:20.879786   59595 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:21:20.879815   59595 kubeadm.go:157] found existing configuration files:
	
	I0814 17:21:20.879867   59595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:21:20.888909   59595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:21:20.888974   59595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:21:20.898051   59595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:21:20.906750   59595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:21:20.906806   59595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:21:20.915812   59595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:21:20.924376   59595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:21:20.924444   59595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:21:20.933456   59595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:21:20.942673   59595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:21:20.942740   59595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:21:20.951647   59595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:21:21.150387   59595 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:23:17.248276   59595 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:23:17.248403   59595 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:23:17.250444   59595 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:23:17.250519   59595 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:23:17.250631   59595 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:23:17.250772   59595 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:23:17.250913   59595 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:23:17.251011   59595 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:23:17.417021   59595 out.go:204]   - Generating certificates and keys ...
	I0814 17:23:17.417163   59595 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:23:17.417276   59595 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:23:17.417383   59595 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:23:17.417464   59595 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:23:17.417548   59595 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:23:17.417606   59595 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:23:17.417676   59595 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:23:17.417740   59595 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:23:17.417847   59595 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:23:17.417947   59595 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:23:17.418007   59595 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:23:17.418086   59595 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:23:17.418162   59595 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:23:17.418233   59595 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:23:17.418318   59595 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:23:17.418397   59595 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:23:17.418517   59595 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:23:17.418640   59595 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:23:17.418695   59595 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:23:17.418780   59595 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:23:17.660207   59595 out.go:204]   - Booting up control plane ...
	I0814 17:23:17.660345   59595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:23:17.660447   59595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:23:17.660573   59595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:23:17.660714   59595 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:23:17.660928   59595 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:23:17.660997   59595 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:23:17.661085   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:23:17.661292   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:23:17.661377   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:23:17.661595   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:23:17.661686   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:23:17.661914   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:23:17.662006   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:23:17.662225   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:23:17.662465   59595 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:23:17.662694   59595 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:23:17.662704   59595 kubeadm.go:310] 
	I0814 17:23:17.662751   59595 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:23:17.662804   59595 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:23:17.662814   59595 kubeadm.go:310] 
	I0814 17:23:17.662860   59595 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:23:17.662902   59595 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:23:17.663080   59595 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:23:17.663106   59595 kubeadm.go:310] 
	I0814 17:23:17.663241   59595 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:23:17.663283   59595 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:23:17.663369   59595 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:23:17.663382   59595 kubeadm.go:310] 
	I0814 17:23:17.663507   59595 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:23:17.663615   59595 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:23:17.663644   59595 kubeadm.go:310] 
	I0814 17:23:17.663810   59595 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:23:17.663931   59595 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:23:17.664035   59595 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:23:17.664131   59595 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:23:17.664216   59595 kubeadm.go:394] duration metric: took 3m55.21157627s to StartCluster
	I0814 17:23:17.664264   59595 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:23:17.664341   59595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:23:17.664440   59595 kubeadm.go:310] 
	I0814 17:23:17.706687   59595 cri.go:89] found id: ""
	I0814 17:23:17.706712   59595 logs.go:276] 0 containers: []
	W0814 17:23:17.706721   59595 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:23:17.706730   59595 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:23:17.706801   59595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:23:17.743862   59595 cri.go:89] found id: ""
	I0814 17:23:17.743890   59595 logs.go:276] 0 containers: []
	W0814 17:23:17.743900   59595 logs.go:278] No container was found matching "etcd"
	I0814 17:23:17.743907   59595 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:23:17.743970   59595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:23:17.786381   59595 cri.go:89] found id: ""
	I0814 17:23:17.786412   59595 logs.go:276] 0 containers: []
	W0814 17:23:17.786423   59595 logs.go:278] No container was found matching "coredns"
	I0814 17:23:17.786430   59595 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:23:17.786499   59595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:23:17.818849   59595 cri.go:89] found id: ""
	I0814 17:23:17.818884   59595 logs.go:276] 0 containers: []
	W0814 17:23:17.818894   59595 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:23:17.818903   59595 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:23:17.818970   59595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:23:17.851026   59595 cri.go:89] found id: ""
	I0814 17:23:17.851055   59595 logs.go:276] 0 containers: []
	W0814 17:23:17.851066   59595 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:23:17.851074   59595 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:23:17.851138   59595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:23:17.884639   59595 cri.go:89] found id: ""
	I0814 17:23:17.884665   59595 logs.go:276] 0 containers: []
	W0814 17:23:17.884672   59595 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:23:17.884678   59595 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:23:17.884739   59595 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:23:17.917856   59595 cri.go:89] found id: ""
	I0814 17:23:17.917887   59595 logs.go:276] 0 containers: []
	W0814 17:23:17.917900   59595 logs.go:278] No container was found matching "kindnet"
	I0814 17:23:17.917912   59595 logs.go:123] Gathering logs for dmesg ...
	I0814 17:23:17.917928   59595 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:23:17.932389   59595 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:23:17.932424   59595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:23:18.098890   59595 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:23:18.098912   59595 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:23:18.098929   59595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:23:18.209711   59595 logs.go:123] Gathering logs for container status ...
	I0814 17:23:18.209757   59595 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:23:18.250134   59595 logs.go:123] Gathering logs for kubelet ...
	I0814 17:23:18.250161   59595 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0814 17:23:18.321765   59595 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:23:18.321817   59595 out.go:239] * 
	* 
	W0814 17:23:18.321871   59595 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:23:18.321890   59595 out.go:239] * 
	* 
	W0814 17:23:18.322751   59595 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:23:18.421671   59595 out.go:177] 
	W0814 17:23:18.558681   59595 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:23:18.558749   59595 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:23:18.558781   59595 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:23:18.698895   59595 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
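The exit status 109 above accompanies minikube's K8S_KUBELET_NOT_RUNNING error: kubeadm's wait-control-plane phase timed out because the kubelet never answered on localhost:10248. A minimal triage sketch, assuming the kubernetes-upgrade-422555 VM is still reachable, using only the commands the log itself recommends (the --extra-config flag is the workaround minikube suggests, not a verified fix for this run):

	# inspect the kubelet on the node
	out/minikube-linux-amd64 -p kubernetes-upgrade-422555 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-422555 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 100"
	# list any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p kubernetes-upgrade-422555 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the v1.20.0 start with the cgroup-driver override minikube suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd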
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-422555
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-422555: (5.621824137s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-422555 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-422555 status --format={{.Host}}: exit status 7 (75.157484ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
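For context, minikube status exits non-zero when the host is not running, and the harness explicitly tolerates the exit status 7 seen here for a stopped profile ("may be ok"). A by-hand check using the same profile and format string (exit-code meaning taken from this run, not a general mapping):

	out/minikube-linux-amd64 -p kubernetes-upgrade-422555 status --format={{.Host}}
	echo "status exit code: $?"   # 7 observed here for the stopped profile; 0 would mean the host is running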
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.894842753s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-422555 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.260523ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-422555] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-422555
	    minikube start -p kubernetes-upgrade-422555 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4225552 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-422555 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
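The refusal above is expected behaviour: minikube will not downgrade an existing v1.31.0 control plane to v1.20.0 in place, so the test only verifies the exit status 106. If a v1.20.0 cluster were actually wanted, option 1 from the suggestion amounts to a delete-and-recreate; a sketch reusing this run's profile, driver, and runtime flags:

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-422555
	out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio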
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-422555 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.008418102s)
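With the second v1.31.0 start complete, a quick sanity check of the restarted cluster (context name taken from this run) would be the same version query the test performs, plus a node listing:

	kubectl --context kubernetes-upgrade-422555 version --output=json
	kubectl --context kubernetes-upgrade-422555 get nodes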
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-14 17:24:40.636962272 +0000 UTC m=+4514.322245079
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-422555 -n kubernetes-upgrade-422555
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-422555 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-422555 logs -n 25: (1.722042118s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |   Profile   |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-984053 sudo cat                              | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | /etc/nsswitch.conf                                   |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo cat                              | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | /etc/hosts                                           |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo cat                              | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | /etc/resolv.conf                                     |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo crictl                           | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | pods                                                 |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo crictl ps                        | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | --all                                                |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo find                             | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | /etc/cni -type f -exec sh -c                         |             |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo ip a s                           | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	| ssh     | -p auto-984053 sudo ip r s                           | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	| ssh     | -p auto-984053 sudo                                  | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | iptables-save                                        |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo iptables                         | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | -t nat -L -n -v                                      |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo systemctl                        | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | status kubelet --all --full                          |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo systemctl                        | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | cat kubelet --no-pager                               |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo journalctl                       | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | -xeu kubelet --all --full                            |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo cat                              | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo cat                              | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | /var/lib/kubelet/config.yaml                         |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo systemctl                        | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC |                     |
	|         | status docker --all --full                           |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo systemctl                        | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | cat docker --no-pager                                |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo cat                              | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | /etc/docker/daemon.json                              |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo docker                           | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC |                     |
	|         | system info                                          |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo systemctl                        | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC |                     |
	|         | status cri-docker --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo systemctl                        | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | cat cri-docker --no-pager                            |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo cat                              | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo cat                              | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo                                  | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC | 14 Aug 24 17:24 UTC |
	|         | cri-dockerd --version                                |             |         |         |                     |                     |
	| ssh     | -p auto-984053 sudo systemctl                        | auto-984053 | jenkins | v1.33.1 | 14 Aug 24 17:24 UTC |                     |
	|         | status containerd --all --full                       |             |         |         |                     |                     |
	|         | --no-pager                                           |             |         |         |                     |                     |
	|---------|------------------------------------------------------|-------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:24:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:24:04.669746   65068 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:24:04.669878   65068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:24:04.669888   65068 out.go:304] Setting ErrFile to fd 2...
	I0814 17:24:04.669893   65068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:24:04.670113   65068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:24:04.670671   65068 out.go:298] Setting JSON to false
	I0814 17:24:04.671711   65068 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7589,"bootTime":1723648656,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:24:04.671771   65068 start.go:139] virtualization: kvm guest
	I0814 17:24:04.673960   65068 out.go:177] * [kubernetes-upgrade-422555] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:24:04.675774   65068 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:24:04.675811   65068 notify.go:220] Checking for updates...
	I0814 17:24:04.677960   65068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:24:04.679243   65068 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:24:04.680534   65068 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:24:04.681890   65068 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:24:04.683190   65068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:24:04.684810   65068 config.go:182] Loaded profile config "kubernetes-upgrade-422555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:24:04.685265   65068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:24:04.685322   65068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:24:04.703048   65068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0814 17:24:04.703543   65068 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:24:04.704114   65068 main.go:141] libmachine: Using API Version  1
	I0814 17:24:04.704141   65068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:24:04.704493   65068 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:24:04.704667   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:04.704885   65068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:24:04.705171   65068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:24:04.705211   65068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:24:04.723562   65068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46299
	I0814 17:24:04.723937   65068 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:24:04.724375   65068 main.go:141] libmachine: Using API Version  1
	I0814 17:24:04.724386   65068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:24:04.724648   65068 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:24:04.724808   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:04.763500   65068 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:24:04.764767   65068 start.go:297] selected driver: kvm2
	I0814 17:24:04.764779   65068 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-422555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:24:04.764879   65068 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:24:04.765548   65068 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:24:04.765643   65068 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:24:04.784590   65068 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:24:04.785124   65068 cni.go:84] Creating CNI manager for ""
	I0814 17:24:04.785147   65068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:24:04.785193   65068 start.go:340] cluster config:
	{Name:kubernetes-upgrade-422555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-422555 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:24:04.785342   65068 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:24:04.787199   65068 out.go:177] * Starting "kubernetes-upgrade-422555" primary control-plane node in "kubernetes-upgrade-422555" cluster
	I0814 17:24:03.233740   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:03.234306   64848 main.go:141] libmachine: (kindnet-984053) DBG | unable to find current IP address of domain kindnet-984053 in network mk-kindnet-984053
	I0814 17:24:03.234327   64848 main.go:141] libmachine: (kindnet-984053) DBG | I0814 17:24:03.234287   64870 retry.go:31] will retry after 1.924855328s: waiting for machine to come up
	I0814 17:24:05.160377   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:05.160854   64848 main.go:141] libmachine: (kindnet-984053) DBG | unable to find current IP address of domain kindnet-984053 in network mk-kindnet-984053
	I0814 17:24:05.160877   64848 main.go:141] libmachine: (kindnet-984053) DBG | I0814 17:24:05.160822   64870 retry.go:31] will retry after 3.275818696s: waiting for machine to come up
	I0814 17:24:05.218683   63770 pod_ready.go:102] pod "coredns-6f6b679f8f-mzh2t" in "kube-system" namespace has status "Ready":"False"
	I0814 17:24:07.714339   63770 pod_ready.go:102] pod "coredns-6f6b679f8f-mzh2t" in "kube-system" namespace has status "Ready":"False"
	I0814 17:24:04.788643   65068 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:24:04.788706   65068 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:24:04.788714   65068 cache.go:56] Caching tarball of preloaded images
	I0814 17:24:04.788817   65068 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:24:04.788831   65068 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 17:24:04.788914   65068 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/config.json ...
	I0814 17:24:04.789102   65068 start.go:360] acquireMachinesLock for kubernetes-upgrade-422555: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:24:08.439968   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:08.440446   64848 main.go:141] libmachine: (kindnet-984053) DBG | unable to find current IP address of domain kindnet-984053 in network mk-kindnet-984053
	I0814 17:24:08.440469   64848 main.go:141] libmachine: (kindnet-984053) DBG | I0814 17:24:08.440412   64870 retry.go:31] will retry after 4.290112282s: waiting for machine to come up
	I0814 17:24:10.212932   63770 pod_ready.go:102] pod "coredns-6f6b679f8f-mzh2t" in "kube-system" namespace has status "Ready":"False"
	I0814 17:24:12.213519   63770 pod_ready.go:102] pod "coredns-6f6b679f8f-mzh2t" in "kube-system" namespace has status "Ready":"False"
	I0814 17:24:14.713594   63770 pod_ready.go:92] pod "coredns-6f6b679f8f-mzh2t" in "kube-system" namespace has status "Ready":"True"
	I0814 17:24:14.713617   63770 pod_ready.go:81] duration metric: took 42.006259836s for pod "coredns-6f6b679f8f-mzh2t" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.713627   63770 pod_ready.go:78] waiting up to 15m0s for pod "coredns-6f6b679f8f-qf498" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.715297   63770 pod_ready.go:97] error getting pod "coredns-6f6b679f8f-qf498" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-qf498" not found
	I0814 17:24:14.715316   63770 pod_ready.go:81] duration metric: took 1.683372ms for pod "coredns-6f6b679f8f-qf498" in "kube-system" namespace to be "Ready" ...
	E0814 17:24:14.715344   63770 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-qf498" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-qf498" not found
	I0814 17:24:14.715355   63770 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-984053" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.719461   63770 pod_ready.go:92] pod "etcd-auto-984053" in "kube-system" namespace has status "Ready":"True"
	I0814 17:24:14.719477   63770 pod_ready.go:81] duration metric: took 4.115545ms for pod "etcd-auto-984053" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.719485   63770 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-984053" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.723786   63770 pod_ready.go:92] pod "kube-apiserver-auto-984053" in "kube-system" namespace has status "Ready":"True"
	I0814 17:24:14.723802   63770 pod_ready.go:81] duration metric: took 4.312505ms for pod "kube-apiserver-auto-984053" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.723810   63770 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-984053" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.728342   63770 pod_ready.go:92] pod "kube-controller-manager-auto-984053" in "kube-system" namespace has status "Ready":"True"
	I0814 17:24:14.728356   63770 pod_ready.go:81] duration metric: took 4.540954ms for pod "kube-controller-manager-auto-984053" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.728365   63770 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-dfp6j" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.912292   63770 pod_ready.go:92] pod "kube-proxy-dfp6j" in "kube-system" namespace has status "Ready":"True"
	I0814 17:24:14.912320   63770 pod_ready.go:81] duration metric: took 183.94793ms for pod "kube-proxy-dfp6j" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:14.912332   63770 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-984053" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:15.310873   63770 pod_ready.go:92] pod "kube-scheduler-auto-984053" in "kube-system" namespace has status "Ready":"True"
	I0814 17:24:15.310894   63770 pod_ready.go:81] duration metric: took 398.554957ms for pod "kube-scheduler-auto-984053" in "kube-system" namespace to be "Ready" ...
	I0814 17:24:15.310901   63770 pod_ready.go:38] duration metric: took 42.618877022s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:24:15.310914   63770 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:24:15.310960   63770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:24:15.326209   63770 api_server.go:72] duration metric: took 43.526665376s to wait for apiserver process to appear ...
	I0814 17:24:15.326233   63770 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:24:15.326247   63770 api_server.go:253] Checking apiserver healthz at https://192.168.39.37:8443/healthz ...
	I0814 17:24:15.330288   63770 api_server.go:279] https://192.168.39.37:8443/healthz returned 200:
	ok
	I0814 17:24:15.331236   63770 api_server.go:141] control plane version: v1.31.0
	I0814 17:24:15.331255   63770 api_server.go:131] duration metric: took 5.015123ms to wait for apiserver health ...
	I0814 17:24:15.331264   63770 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:24:15.514137   63770 system_pods.go:59] 7 kube-system pods found
	I0814 17:24:15.514165   63770 system_pods.go:61] "coredns-6f6b679f8f-mzh2t" [ab2b1532-e080-4a87-b6db-69f4dccab0f8] Running
	I0814 17:24:15.514170   63770 system_pods.go:61] "etcd-auto-984053" [32d3e0c3-bb76-4908-93c4-946ed1f565fe] Running
	I0814 17:24:15.514174   63770 system_pods.go:61] "kube-apiserver-auto-984053" [02615fb7-39fe-4095-bd65-36e6f4581f56] Running
	I0814 17:24:15.514177   63770 system_pods.go:61] "kube-controller-manager-auto-984053" [7c051997-b7dc-4d80-b7e0-bf627002ce3e] Running
	I0814 17:24:15.514179   63770 system_pods.go:61] "kube-proxy-dfp6j" [1c6db887-d3c1-4321-8cb2-4953539b08a0] Running
	I0814 17:24:15.514182   63770 system_pods.go:61] "kube-scheduler-auto-984053" [88d823d0-e620-4c08-b3af-0130c4a7cd16] Running
	I0814 17:24:15.514185   63770 system_pods.go:61] "storage-provisioner" [c3ca35f9-499b-4614-a6e0-0f1b075d788f] Running
	I0814 17:24:15.514190   63770 system_pods.go:74] duration metric: took 182.920925ms to wait for pod list to return data ...
	I0814 17:24:15.514197   63770 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:24:15.711394   63770 default_sa.go:45] found service account: "default"
	I0814 17:24:15.711422   63770 default_sa.go:55] duration metric: took 197.218153ms for default service account to be created ...
	I0814 17:24:15.711431   63770 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:24:15.913898   63770 system_pods.go:86] 7 kube-system pods found
	I0814 17:24:15.913926   63770 system_pods.go:89] "coredns-6f6b679f8f-mzh2t" [ab2b1532-e080-4a87-b6db-69f4dccab0f8] Running
	I0814 17:24:15.913931   63770 system_pods.go:89] "etcd-auto-984053" [32d3e0c3-bb76-4908-93c4-946ed1f565fe] Running
	I0814 17:24:15.913935   63770 system_pods.go:89] "kube-apiserver-auto-984053" [02615fb7-39fe-4095-bd65-36e6f4581f56] Running
	I0814 17:24:15.913939   63770 system_pods.go:89] "kube-controller-manager-auto-984053" [7c051997-b7dc-4d80-b7e0-bf627002ce3e] Running
	I0814 17:24:15.913942   63770 system_pods.go:89] "kube-proxy-dfp6j" [1c6db887-d3c1-4321-8cb2-4953539b08a0] Running
	I0814 17:24:15.913946   63770 system_pods.go:89] "kube-scheduler-auto-984053" [88d823d0-e620-4c08-b3af-0130c4a7cd16] Running
	I0814 17:24:15.913950   63770 system_pods.go:89] "storage-provisioner" [c3ca35f9-499b-4614-a6e0-0f1b075d788f] Running
	I0814 17:24:15.913956   63770 system_pods.go:126] duration metric: took 202.520061ms to wait for k8s-apps to be running ...
	I0814 17:24:15.913963   63770 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:24:15.914017   63770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:24:15.929142   63770 system_svc.go:56] duration metric: took 15.169226ms WaitForService to wait for kubelet
	I0814 17:24:15.929174   63770 kubeadm.go:582] duration metric: took 44.129632453s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:24:15.929199   63770 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:24:16.111847   63770 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:24:16.111880   63770 node_conditions.go:123] node cpu capacity is 2
	I0814 17:24:16.111891   63770 node_conditions.go:105] duration metric: took 182.68745ms to run NodePressure ...
	I0814 17:24:16.111902   63770 start.go:241] waiting for startup goroutines ...
	I0814 17:24:16.111908   63770 start.go:246] waiting for cluster config update ...
	I0814 17:24:16.111919   63770 start.go:255] writing updated cluster config ...
	I0814 17:24:16.112200   63770 ssh_runner.go:195] Run: rm -f paused
	I0814 17:24:16.158903   63770 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:24:16.160994   63770 out.go:177] * Done! kubectl is now configured to use "auto-984053" cluster and "default" namespace by default
	I0814 17:24:12.732336   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:12.732851   64848 main.go:141] libmachine: (kindnet-984053) DBG | unable to find current IP address of domain kindnet-984053 in network mk-kindnet-984053
	I0814 17:24:12.732875   64848 main.go:141] libmachine: (kindnet-984053) DBG | I0814 17:24:12.732818   64870 retry.go:31] will retry after 3.741619464s: waiting for machine to come up
	I0814 17:24:16.476535   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.477123   64848 main.go:141] libmachine: (kindnet-984053) Found IP for machine: 192.168.61.31
	I0814 17:24:16.477150   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has current primary IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.477160   64848 main.go:141] libmachine: (kindnet-984053) Reserving static IP address...
	I0814 17:24:16.477623   64848 main.go:141] libmachine: (kindnet-984053) DBG | unable to find host DHCP lease matching {name: "kindnet-984053", mac: "52:54:00:f8:3a:65", ip: "192.168.61.31"} in network mk-kindnet-984053
	I0814 17:24:16.572815   64848 main.go:141] libmachine: (kindnet-984053) Reserved static IP address: 192.168.61.31
	I0814 17:24:16.572845   64848 main.go:141] libmachine: (kindnet-984053) Waiting for SSH to be available...
	I0814 17:24:16.572855   64848 main.go:141] libmachine: (kindnet-984053) DBG | Getting to WaitForSSH function...
	I0814 17:24:16.575998   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.576466   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:16.576496   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.576671   64848 main.go:141] libmachine: (kindnet-984053) DBG | Using SSH client type: external
	I0814 17:24:16.576706   64848 main.go:141] libmachine: (kindnet-984053) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kindnet-984053/id_rsa (-rw-------)
	I0814 17:24:16.576741   64848 main.go:141] libmachine: (kindnet-984053) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/kindnet-984053/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:24:16.576757   64848 main.go:141] libmachine: (kindnet-984053) DBG | About to run SSH command:
	I0814 17:24:16.576776   64848 main.go:141] libmachine: (kindnet-984053) DBG | exit 0
	I0814 17:24:16.707195   64848 main.go:141] libmachine: (kindnet-984053) DBG | SSH cmd err, output: <nil>: 
	I0814 17:24:16.707459   64848 main.go:141] libmachine: (kindnet-984053) KVM machine creation complete!
	I0814 17:24:16.707824   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetConfigRaw
	I0814 17:24:16.708429   64848 main.go:141] libmachine: (kindnet-984053) Calling .DriverName
	I0814 17:24:16.708632   64848 main.go:141] libmachine: (kindnet-984053) Calling .DriverName
	I0814 17:24:16.708831   64848 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 17:24:16.708847   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetState
	I0814 17:24:16.710153   64848 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 17:24:16.710171   64848 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 17:24:16.710179   64848 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 17:24:16.710187   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:16.712632   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.713075   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:16.713114   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.713259   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:16.713442   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:16.713640   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:16.713858   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:16.714011   64848 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:16.714216   64848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0814 17:24:16.714229   64848 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 17:24:16.826559   64848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:24:16.826581   64848 main.go:141] libmachine: Detecting the provisioner...
	I0814 17:24:16.826589   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:16.829727   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.830158   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:16.830185   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.830366   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:16.830571   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:16.830774   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:16.830978   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:16.831171   64848 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:16.831399   64848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0814 17:24:16.831418   64848 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 17:24:16.948013   64848 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 17:24:16.948114   64848 main.go:141] libmachine: found compatible host: buildroot
	I0814 17:24:16.948128   64848 main.go:141] libmachine: Provisioning with buildroot...
	I0814 17:24:16.948139   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetMachineName
	I0814 17:24:16.948427   64848 buildroot.go:166] provisioning hostname "kindnet-984053"
	I0814 17:24:16.948457   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetMachineName
	I0814 17:24:16.948676   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:16.951662   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.952056   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:16.952094   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:16.952350   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:16.952527   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:16.952689   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:16.952840   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:16.953037   64848 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:16.953245   64848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0814 17:24:16.953263   64848 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-984053 && echo "kindnet-984053" | sudo tee /etc/hostname
	I0814 17:24:17.077509   64848 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-984053
	
	I0814 17:24:17.077561   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:17.080589   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.080985   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:17.081026   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.081240   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:17.081446   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:17.081620   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:17.081754   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:17.081956   64848 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:17.082174   64848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0814 17:24:17.082200   64848 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-984053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-984053/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-984053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:24:17.200223   64848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:24:17.200253   64848 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:24:17.200302   64848 buildroot.go:174] setting up certificates
	I0814 17:24:17.200317   64848 provision.go:84] configureAuth start
	I0814 17:24:17.200335   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetMachineName
	I0814 17:24:17.200615   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetIP
	I0814 17:24:17.203134   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.203574   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:17.203605   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.203789   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:17.206432   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.206765   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:17.206800   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.206920   64848 provision.go:143] copyHostCerts
	I0814 17:24:17.207001   64848 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:24:17.207012   64848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:24:17.207078   64848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:24:17.207163   64848 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:24:17.207171   64848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:24:17.207200   64848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:24:17.207249   64848 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:24:17.207255   64848 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:24:17.207276   64848 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:24:17.207345   64848 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.kindnet-984053 san=[127.0.0.1 192.168.61.31 kindnet-984053 localhost minikube]
	I0814 17:24:18.196337   65068 start.go:364] duration metric: took 13.407210452s to acquireMachinesLock for "kubernetes-upgrade-422555"
	I0814 17:24:18.196380   65068 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:24:18.196391   65068 fix.go:54] fixHost starting: 
	I0814 17:24:18.196827   65068 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:24:18.196882   65068 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:24:18.217510   65068 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0814 17:24:18.217972   65068 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:24:18.218619   65068 main.go:141] libmachine: Using API Version  1
	I0814 17:24:18.218654   65068 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:24:18.219010   65068 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:24:18.219179   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:18.219479   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetState
	I0814 17:24:18.221440   65068 fix.go:112] recreateIfNeeded on kubernetes-upgrade-422555: state=Running err=<nil>
	W0814 17:24:18.221475   65068 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:24:18.223359   65068 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-422555" VM ...
	I0814 17:24:18.224699   65068 machine.go:94] provisionDockerMachine start ...
	I0814 17:24:18.224727   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:18.224935   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:18.227819   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.228239   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:18.228264   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.228446   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:18.228624   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:18.228812   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:18.228953   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:18.229120   65068 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:18.229361   65068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:24:18.229376   65068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:24:18.361181   65068 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-422555
	
	I0814 17:24:18.361224   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetMachineName
	I0814 17:24:18.361483   65068 buildroot.go:166] provisioning hostname "kubernetes-upgrade-422555"
	I0814 17:24:18.361507   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetMachineName
	I0814 17:24:18.361734   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:18.365173   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.365667   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:18.365700   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.366036   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:18.366239   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:18.366398   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:18.366591   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:18.366766   65068 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:18.366979   65068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:24:18.366996   65068 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-422555 && echo "kubernetes-upgrade-422555" | sudo tee /etc/hostname
	I0814 17:24:18.510589   65068 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-422555
	
	I0814 17:24:18.510625   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:18.513904   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.514377   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:18.514410   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.514669   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:18.514869   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:18.515099   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:18.515291   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:18.515489   65068 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:18.515730   65068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:24:18.515759   65068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-422555' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-422555/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-422555' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:24:18.641125   65068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:24:18.641158   65068 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:24:18.641197   65068 buildroot.go:174] setting up certificates
	I0814 17:24:18.641210   65068 provision.go:84] configureAuth start
	I0814 17:24:18.641222   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetMachineName
	I0814 17:24:18.641518   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetIP
	I0814 17:24:18.644634   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.645184   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:18.645223   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.645657   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:18.649171   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.649577   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:18.649601   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.649867   65068 provision.go:143] copyHostCerts
	I0814 17:24:18.649930   65068 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:24:18.649968   65068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:24:18.650046   65068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:24:18.650158   65068 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:24:18.650168   65068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:24:18.650200   65068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:24:18.650268   65068 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:24:18.650276   65068 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:24:18.650298   65068 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:24:18.650392   65068 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-422555 san=[127.0.0.1 192.168.72.9 kubernetes-upgrade-422555 localhost minikube]
	I0814 17:24:18.933615   65068 provision.go:177] copyRemoteCerts
	I0814 17:24:18.933667   65068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:24:18.933689   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:18.937256   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.937846   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:18.937885   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:18.938077   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:18.938279   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:18.938487   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:18.938661   65068 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa Username:docker}
	I0814 17:24:19.031641   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:24:19.059528   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:24:19.091589   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0814 17:24:19.126512   65068 provision.go:87] duration metric: took 485.286843ms to configureAuth
	I0814 17:24:19.126548   65068 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:24:19.126814   65068 config.go:182] Loaded profile config "kubernetes-upgrade-422555": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:24:19.126920   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:19.130668   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:19.131136   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:19.131165   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:19.131403   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:19.131804   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:19.132008   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:19.132166   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:19.132427   65068 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:19.132656   65068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:24:19.132683   65068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:24:17.474056   64848 provision.go:177] copyRemoteCerts
	I0814 17:24:17.474121   64848 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:24:17.474145   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:17.477142   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.477498   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:17.477529   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.477750   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:17.477920   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:17.478040   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:17.478202   64848 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kindnet-984053/id_rsa Username:docker}
	I0814 17:24:17.561257   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:24:17.586202   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0814 17:24:17.611239   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:24:17.637080   64848 provision.go:87] duration metric: took 436.745135ms to configureAuth
	I0814 17:24:17.637108   64848 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:24:17.637756   64848 config.go:182] Loaded profile config "kindnet-984053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:24:17.637867   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:17.640879   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.641192   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:17.641216   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.641358   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:17.641545   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:17.641703   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:17.641872   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:17.642056   64848 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:17.642263   64848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0814 17:24:17.642280   64848 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:24:17.927132   64848 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:24:17.927162   64848 main.go:141] libmachine: Checking connection to Docker...
	I0814 17:24:17.927173   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetURL
	I0814 17:24:17.928719   64848 main.go:141] libmachine: (kindnet-984053) DBG | Using libvirt version 6000000
	I0814 17:24:17.931481   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.931879   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:17.931909   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.932143   64848 main.go:141] libmachine: Docker is up and running!
	I0814 17:24:17.932160   64848 main.go:141] libmachine: Reticulating splines...
	I0814 17:24:17.932168   64848 client.go:171] duration metric: took 24.390854426s to LocalClient.Create
	I0814 17:24:17.932199   64848 start.go:167] duration metric: took 24.390925342s to libmachine.API.Create "kindnet-984053"
	I0814 17:24:17.932213   64848 start.go:293] postStartSetup for "kindnet-984053" (driver="kvm2")
	I0814 17:24:17.932227   64848 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:24:17.932249   64848 main.go:141] libmachine: (kindnet-984053) Calling .DriverName
	I0814 17:24:17.932506   64848 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:24:17.932534   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:17.935376   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.935964   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:17.935997   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:17.936250   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:17.936451   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:17.936621   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:17.936787   64848 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kindnet-984053/id_rsa Username:docker}
	I0814 17:24:18.030198   64848 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:24:18.034861   64848 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:24:18.034893   64848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:24:18.034953   64848 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:24:18.035034   64848 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:24:18.035127   64848 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:24:18.045079   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:24:18.073349   64848 start.go:296] duration metric: took 141.102831ms for postStartSetup
	I0814 17:24:18.073425   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetConfigRaw
	I0814 17:24:18.073962   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetIP
	I0814 17:24:18.077310   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.077698   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:18.077731   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.077969   64848 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/config.json ...
	I0814 17:24:18.078211   64848 start.go:128] duration metric: took 24.556953103s to createHost
	I0814 17:24:18.078256   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:18.080865   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.081242   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:18.081279   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.081459   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:18.081707   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:18.081874   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:18.082037   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:18.082235   64848 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:18.082453   64848 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.31 22 <nil> <nil>}
	I0814 17:24:18.082468   64848 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:24:18.196194   64848 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723656258.171088472
	
	I0814 17:24:18.196215   64848 fix.go:216] guest clock: 1723656258.171088472
	I0814 17:24:18.196226   64848 fix.go:229] Guest: 2024-08-14 17:24:18.171088472 +0000 UTC Remote: 2024-08-14 17:24:18.078225216 +0000 UTC m=+25.816978732 (delta=92.863256ms)
	I0814 17:24:18.196266   64848 fix.go:200] guest clock delta is within tolerance: 92.863256ms
	I0814 17:24:18.196276   64848 start.go:83] releasing machines lock for "kindnet-984053", held for 24.675116558s
	I0814 17:24:18.196306   64848 main.go:141] libmachine: (kindnet-984053) Calling .DriverName
	I0814 17:24:18.196600   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetIP
	I0814 17:24:18.199910   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.200462   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:18.200506   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.200726   64848 main.go:141] libmachine: (kindnet-984053) Calling .DriverName
	I0814 17:24:18.201421   64848 main.go:141] libmachine: (kindnet-984053) Calling .DriverName
	I0814 17:24:18.201620   64848 main.go:141] libmachine: (kindnet-984053) Calling .DriverName
	I0814 17:24:18.201733   64848 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:24:18.201787   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:18.201844   64848 ssh_runner.go:195] Run: cat /version.json
	I0814 17:24:18.201869   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHHostname
	I0814 17:24:18.204804   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.205162   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:18.205191   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.205252   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.205469   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:18.205619   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:18.205648   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:18.205684   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:18.205771   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:18.205854   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHPort
	I0814 17:24:18.205910   64848 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kindnet-984053/id_rsa Username:docker}
	I0814 17:24:18.205993   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHKeyPath
	I0814 17:24:18.206116   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetSSHUsername
	I0814 17:24:18.206232   64848 sshutil.go:53] new ssh client: &{IP:192.168.61.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kindnet-984053/id_rsa Username:docker}
	I0814 17:24:18.334253   64848 ssh_runner.go:195] Run: systemctl --version
	I0814 17:24:18.340657   64848 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:24:18.505239   64848 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:24:18.513057   64848 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:24:18.513123   64848 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:24:18.530144   64848 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:24:18.530172   64848 start.go:495] detecting cgroup driver to use...
	I0814 17:24:18.530244   64848 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:24:18.549094   64848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:24:18.566961   64848 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:24:18.567030   64848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:24:18.581276   64848 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:24:18.596652   64848 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:24:18.729731   64848 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:24:18.885469   64848 docker.go:233] disabling docker service ...
	I0814 17:24:18.885535   64848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:24:18.902908   64848 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:24:18.920650   64848 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:24:19.094407   64848 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:24:19.216878   64848 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:24:19.230320   64848 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:24:19.247665   64848 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:24:19.247744   64848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:19.260292   64848 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:24:19.260365   64848 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:19.275596   64848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:19.288766   64848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:19.301382   64848 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:24:19.313908   64848 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:19.327984   64848 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:19.347523   64848 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:19.360690   64848 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:24:19.373287   64848 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:24:19.373358   64848 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:24:19.387243   64848 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:24:19.397440   64848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:24:19.524896   64848 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:24:19.675259   64848 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:24:19.675357   64848 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:24:19.680043   64848 start.go:563] Will wait 60s for crictl version
	I0814 17:24:19.680092   64848 ssh_runner.go:195] Run: which crictl
	I0814 17:24:19.683521   64848 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:24:19.719639   64848 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:24:19.719726   64848 ssh_runner.go:195] Run: crio --version
	I0814 17:24:19.747453   64848 ssh_runner.go:195] Run: crio --version
	I0814 17:24:19.778228   64848 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:24:19.779397   64848 main.go:141] libmachine: (kindnet-984053) Calling .GetIP
	I0814 17:24:19.782478   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:19.782845   64848 main.go:141] libmachine: (kindnet-984053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:3a:65", ip: ""} in network mk-kindnet-984053: {Iface:virbr1 ExpiryTime:2024-08-14 18:24:08 +0000 UTC Type:0 Mac:52:54:00:f8:3a:65 Iaid: IPaddr:192.168.61.31 Prefix:24 Hostname:kindnet-984053 Clientid:01:52:54:00:f8:3a:65}
	I0814 17:24:19.782872   64848 main.go:141] libmachine: (kindnet-984053) DBG | domain kindnet-984053 has defined IP address 192.168.61.31 and MAC address 52:54:00:f8:3a:65 in network mk-kindnet-984053
	I0814 17:24:19.783156   64848 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 17:24:19.787257   64848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:24:19.799233   64848 kubeadm.go:883] updating cluster {Name:kindnet-984053 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:kindnet-984053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:24:19.799389   64848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:24:19.799446   64848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:24:19.834947   64848 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:24:19.835035   64848 ssh_runner.go:195] Run: which lz4
	I0814 17:24:19.838845   64848 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:24:19.842828   64848 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:24:19.842859   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:24:21.126141   64848 crio.go:462] duration metric: took 1.28734063s to copy over tarball
	I0814 17:24:21.126227   64848 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:24:23.279917   64848 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.153663767s)
	I0814 17:24:23.279946   64848 crio.go:469] duration metric: took 2.153770692s to extract the tarball
	I0814 17:24:23.279954   64848 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:24:23.316415   64848 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:24:23.358015   64848 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:24:23.358038   64848 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:24:23.358046   64848 kubeadm.go:934] updating node { 192.168.61.31 8443 v1.31.0 crio true true} ...
	I0814 17:24:23.358148   64848 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-984053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kindnet-984053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0814 17:24:23.358211   64848 ssh_runner.go:195] Run: crio config
	I0814 17:24:23.404026   64848 cni.go:84] Creating CNI manager for "kindnet"
	I0814 17:24:23.404046   64848 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:24:23.404065   64848 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.31 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-984053 NodeName:kindnet-984053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:24:23.404198   64848 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-984053"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:24:23.404258   64848 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:24:23.413655   64848 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:24:23.413725   64848 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:24:23.422361   64848 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0814 17:24:23.437791   64848 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:24:23.453652   64848 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0814 17:24:23.471217   64848 ssh_runner.go:195] Run: grep 192.168.61.31	control-plane.minikube.internal$ /etc/hosts
	I0814 17:24:23.475076   64848 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:24:23.487070   64848 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:24:23.602500   64848 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:24:23.619903   64848 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053 for IP: 192.168.61.31
	I0814 17:24:23.619934   64848 certs.go:194] generating shared ca certs ...
	I0814 17:24:23.619955   64848 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:24:23.620181   64848 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:24:23.620245   64848 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:24:23.620261   64848 certs.go:256] generating profile certs ...
	I0814 17:24:23.620352   64848 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.key
	I0814 17:24:23.620381   64848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt with IP's: []
	I0814 17:24:23.797935   64848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt ...
	I0814 17:24:23.797964   64848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: {Name:mk5125066329eaecaa16a5eff0f1131acf407c09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:24:23.798127   64848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.key ...
	I0814 17:24:23.798139   64848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.key: {Name:mk33ce65076f4ffc7da11deda9e90165eae09f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:24:23.798211   64848 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.key.409dc100
	I0814 17:24:23.798225   64848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.crt.409dc100 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.31]
	I0814 17:24:23.906660   64848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.crt.409dc100 ...
	I0814 17:24:23.906689   64848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.crt.409dc100: {Name:mk6956b4d5c8050dc1b22e20c93b43ad46e0537a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:24:23.906855   64848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.key.409dc100 ...
	I0814 17:24:23.906869   64848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.key.409dc100: {Name:mk8c23bd0019ab60bc80828a2252faafcb21b2a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:24:23.906958   64848 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.crt.409dc100 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.crt
	I0814 17:24:23.907036   64848 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.key.409dc100 -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.key
	I0814 17:24:23.907088   64848 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/proxy-client.key
	I0814 17:24:23.907101   64848 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/proxy-client.crt with IP's: []
	I0814 17:24:24.182199   64848 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/proxy-client.crt ...
	I0814 17:24:24.182262   64848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/proxy-client.crt: {Name:mkfab696f51593ffc8106d6d32d7d83c133b3528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:24:24.182453   64848 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/proxy-client.key ...
	I0814 17:24:24.182468   64848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/proxy-client.key: {Name:mk3fa26a604544feb073cb017025ec29c26e2121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:24:24.182674   64848 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:24:24.182714   64848 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:24:24.182727   64848 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:24:24.182747   64848 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:24:24.182783   64848 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:24:24.182809   64848 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:24:24.182846   64848 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:24:24.183494   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:24:24.208040   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:24:24.230966   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:24:24.253841   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:24:24.276640   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0814 17:24:24.302026   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:24:24.324022   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:24:24.346290   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 17:24:24.369180   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:24:24.394520   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:24:24.422441   64848 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:24:24.448374   64848 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:24:24.465958   64848 ssh_runner.go:195] Run: openssl version
	I0814 17:24:24.472294   64848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:24:24.484357   64848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:24:24.488606   64848 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:24:24.488697   64848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:24:24.494544   64848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:24:24.505529   64848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:24:24.516363   64848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:24:24.520727   64848 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:24:24.520791   64848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:24:24.526958   64848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:24:24.539588   64848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:24:24.552500   64848 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:24:24.557994   64848 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:24:24.558061   64848 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:24:24.563645   64848 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:24:24.577978   64848 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:24:24.583718   64848 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 17:24:24.583775   64848 kubeadm.go:392] StartCluster: {Name:kindnet-984053 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:kindnet-984053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.31 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:24:24.583851   64848 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:24:24.583904   64848 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:24:24.626444   64848 cri.go:89] found id: ""
	I0814 17:24:24.626512   64848 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:24:24.638114   64848 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:24:24.650046   64848 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:24:24.663207   64848 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:24:24.663231   64848 kubeadm.go:157] found existing configuration files:
	
	I0814 17:24:24.663307   64848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:24:24.673157   64848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:24:24.673226   64848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:24:24.683048   64848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:24:24.691784   64848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:24:24.691845   64848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:24:24.708988   64848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:24:24.719081   64848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:24:24.719140   64848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:24:24.728369   64848 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:24:24.737202   64848 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:24:24.737276   64848 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:24:24.747120   64848 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:24:24.803302   64848 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:24:24.803402   64848 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:24:24.903274   64848 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:24:24.903446   64848 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:24:24.903586   64848 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:24:24.912558   64848 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:24:25.132192   64848 out.go:204]   - Generating certificates and keys ...
	I0814 17:24:25.132346   64848 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:24:25.132440   64848 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:24:25.132582   64848 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 17:24:25.226848   64848 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 17:24:25.366974   64848 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 17:24:25.457504   64848 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 17:24:25.549177   64848 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 17:24:25.549292   64848 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-984053 localhost] and IPs [192.168.61.31 127.0.0.1 ::1]
	I0814 17:24:25.735293   64848 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 17:24:25.735476   64848 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-984053 localhost] and IPs [192.168.61.31 127.0.0.1 ::1]
	I0814 17:24:26.093745   64848 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 17:24:26.284959   64848 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 17:24:26.437866   64848 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 17:24:26.437964   64848 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:24:26.945678   64848 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:24:27.119222   64848 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:24:27.314268   64848 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:24:27.600138   64848 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:24:27.771042   64848 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:24:27.771741   64848 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:24:27.775122   64848 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:24:27.272822   65068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:24:27.272850   65068 machine.go:97] duration metric: took 9.048133091s to provisionDockerMachine
	I0814 17:24:27.272865   65068 start.go:293] postStartSetup for "kubernetes-upgrade-422555" (driver="kvm2")
	I0814 17:24:27.272878   65068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:24:27.272913   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:27.273238   65068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:24:27.273270   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:27.275955   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.276304   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:27.276331   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.276484   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:27.276645   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:27.276858   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:27.277051   65068 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa Username:docker}
	I0814 17:24:27.366179   65068 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:24:27.370227   65068 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:24:27.370252   65068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:24:27.370319   65068 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:24:27.370426   65068 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:24:27.370570   65068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:24:27.380356   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:24:27.404907   65068 start.go:296] duration metric: took 132.027907ms for postStartSetup
	I0814 17:24:27.404951   65068 fix.go:56] duration metric: took 9.208560643s for fixHost
	I0814 17:24:27.404972   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:27.408436   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.408941   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:27.408975   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.409176   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:27.409443   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:27.409616   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:27.409787   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:27.409962   65068 main.go:141] libmachine: Using SSH client type: native
	I0814 17:24:27.410205   65068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.9 22 <nil> <nil>}
	I0814 17:24:27.410222   65068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:24:27.524112   65068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723656267.505743036
	
	I0814 17:24:27.524140   65068 fix.go:216] guest clock: 1723656267.505743036
	I0814 17:24:27.524149   65068 fix.go:229] Guest: 2024-08-14 17:24:27.505743036 +0000 UTC Remote: 2024-08-14 17:24:27.404954938 +0000 UTC m=+22.772711268 (delta=100.788098ms)
	I0814 17:24:27.524173   65068 fix.go:200] guest clock delta is within tolerance: 100.788098ms
	I0814 17:24:27.524178   65068 start.go:83] releasing machines lock for "kubernetes-upgrade-422555", held for 9.327820726s
	I0814 17:24:27.524195   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:27.524513   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetIP
	I0814 17:24:27.527730   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.528077   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:27.528114   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.528277   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:27.528828   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:27.529001   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .DriverName
	I0814 17:24:27.529100   65068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:24:27.529148   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:27.529262   65068 ssh_runner.go:195] Run: cat /version.json
	I0814 17:24:27.529281   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHHostname
	I0814 17:24:27.532008   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.532036   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.532395   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:27.532432   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:27.532460   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.532777   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:27.532797   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:27.532801   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHPort
	I0814 17:24:27.532974   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:27.532983   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHKeyPath
	I0814 17:24:27.533146   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:27.533150   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetSSHUsername
	I0814 17:24:27.533302   65068 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa Username:docker}
	I0814 17:24:27.533362   65068 sshutil.go:53] new ssh client: &{IP:192.168.72.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/kubernetes-upgrade-422555/id_rsa Username:docker}
	I0814 17:24:27.649875   65068 ssh_runner.go:195] Run: systemctl --version
	I0814 17:24:27.658397   65068 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:24:27.815131   65068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:24:27.843949   65068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:24:27.844036   65068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:24:27.866570   65068 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
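The find command above renames any bridge or podman CNI config under /etc/cni/net.d by appending .mk_disabled so it no longer takes effect; here nothing matched. A rough Go equivalent, with the pattern matching and helper name chosen purely for illustration, would be:

// Sketch of the CNI-disabling step above: rename any bridge/podman CNI
// config in /etc/cni/net.d by appending ".mk_disabled". Pattern list and
// helper name are illustrative, not minikube's code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if len(moved) == 0 {
		fmt.Println("no active bridge cni configs found - nothing to disable")
	} else {
		fmt.Println("disabled:", moved)
	}
}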
	I0814 17:24:27.866598   65068 start.go:495] detecting cgroup driver to use...
	I0814 17:24:27.866676   65068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:24:27.942510   65068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:24:27.976710   65068 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:24:27.976772   65068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:24:28.160205   65068 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:24:28.233054   65068 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:24:28.540059   65068 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:24:28.951091   65068 docker.go:233] disabling docker service ...
	I0814 17:24:28.951173   65068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:24:29.040632   65068 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:24:29.083297   65068 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:24:29.348657   65068 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:24:29.597816   65068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:24:29.618463   65068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:24:29.639788   65068 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:24:29.639878   65068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:29.654841   65068 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:24:29.654916   65068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:29.671120   65068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:29.683673   65068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:29.700304   65068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:24:29.713090   65068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:29.727050   65068 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:29.741169   65068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:24:29.760816   65068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:24:29.776332   65068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:24:29.786976   65068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:24:30.026332   65068 ssh_runner.go:195] Run: sudo systemctl restart crio
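The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they set the pause image, switch cgroup_manager to cgroupfs, pin conmon_cgroup to pod, and allow unprivileged low ports via default_sysctls, after which crio is restarted. As one hedged example, the pause-image edit could be done without sed roughly as below; the path and regex mirror the logged command, but this is not minikube's code.

// Sketch of the pause-image edit above: replace any existing pause_image
// line in 02-crio.conf with the desired value. Illustrative only.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, image)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("pause image updated; restart crio to apply")
}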
	I0814 17:24:30.695333   65068 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:24:30.695448   65068 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
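The 60-second wait for /var/run/crio/crio.sock is a simple poll-until-the-path-exists loop. A small illustrative Go version (poll interval and helper name are assumptions, not minikube's implementation):

// Sketch of the "wait up to 60s for the CRI socket" step: poll for the
// socket path until it exists or the deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}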
	I0814 17:24:30.699768   65068 start.go:563] Will wait 60s for crictl version
	I0814 17:24:30.699828   65068 ssh_runner.go:195] Run: which crictl
	I0814 17:24:30.703234   65068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:24:30.741352   65068 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:24:30.741453   65068 ssh_runner.go:195] Run: crio --version
	I0814 17:24:30.772762   65068 ssh_runner.go:195] Run: crio --version
	I0814 17:24:30.809054   65068 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:24:27.776492   64848 out.go:204]   - Booting up control plane ...
	I0814 17:24:27.776598   64848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:24:27.776725   64848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:24:27.777598   64848 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:24:27.800044   64848 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:24:27.807934   64848 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:24:27.808008   64848 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:24:27.990543   64848 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:24:27.990698   64848 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:24:28.990622   64848 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002206431s
	I0814 17:24:28.990773   64848 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:24:34.495412   64848 kubeadm.go:310] [api-check] The API server is healthy after 5.504995803s
	I0814 17:24:34.515227   64848 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:24:34.537948   64848 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:24:34.574947   64848 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:24:34.575106   64848 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-984053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:24:34.591540   64848 kubeadm.go:310] [bootstrap-token] Using token: fvvmwv.9dfdtm3zhhaynqhq
	I0814 17:24:30.810462   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) Calling .GetIP
	I0814 17:24:30.813707   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:30.814192   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:c9:3b", ip: ""} in network mk-kubernetes-upgrade-422555: {Iface:virbr4 ExpiryTime:2024-08-14 18:23:35 +0000 UTC Type:0 Mac:52:54:00:7b:c9:3b Iaid: IPaddr:192.168.72.9 Prefix:24 Hostname:kubernetes-upgrade-422555 Clientid:01:52:54:00:7b:c9:3b}
	I0814 17:24:30.814229   65068 main.go:141] libmachine: (kubernetes-upgrade-422555) DBG | domain kubernetes-upgrade-422555 has defined IP address 192.168.72.9 and MAC address 52:54:00:7b:c9:3b in network mk-kubernetes-upgrade-422555
	I0814 17:24:30.814448   65068 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:24:30.818754   65068 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-422555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:24:30.818935   65068 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:24:30.819001   65068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:24:30.864919   65068 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:24:30.864946   65068 crio.go:433] Images already preloaded, skipping extraction
	I0814 17:24:30.865006   65068 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:24:30.902381   65068 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:24:30.902415   65068 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:24:30.902426   65068 kubeadm.go:934] updating node { 192.168.72.9 8443 v1.31.0 crio true true} ...
	I0814 17:24:30.902555   65068 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-422555 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
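The kubelet unit drop-in above is rendered from the node's settings: the binary path for v1.31.0, the hostname override, and the node IP. One way to produce the same text with text/template, using illustrative type and field names rather than minikube's own, is:

// Sketch of rendering the kubelet systemd drop-in shown above from node
// settings. Type, field, and template names are illustrative.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.31.0",
		NodeName:          "kubernetes-upgrade-422555",
		NodeIP:            "192.168.72.9",
	})
}

With the values from this run, the rendered output matches the drop-in logged for node kubernetes-upgrade-422555.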
	I0814 17:24:30.902649   65068 ssh_runner.go:195] Run: crio config
	I0814 17:24:30.957103   65068 cni.go:84] Creating CNI manager for ""
	I0814 17:24:30.957126   65068 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:24:30.957134   65068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:24:30.957154   65068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.9 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-422555 NodeName:kubernetes-upgrade-422555 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:24:30.957332   65068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-422555"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.9
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.9"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:24:30.957409   65068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:24:30.968341   65068 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:24:30.968423   65068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:24:30.978686   65068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0814 17:24:30.997859   65068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:24:31.015648   65068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0814 17:24:31.035533   65068 ssh_runner.go:195] Run: grep 192.168.72.9	control-plane.minikube.internal$ /etc/hosts
	I0814 17:24:31.039224   65068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:24:31.225864   65068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:24:31.272959   65068 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555 for IP: 192.168.72.9
	I0814 17:24:31.272979   65068 certs.go:194] generating shared ca certs ...
	I0814 17:24:31.272994   65068 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:24:31.273166   65068 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:24:31.273235   65068 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:24:31.273250   65068 certs.go:256] generating profile certs ...
	I0814 17:24:31.273372   65068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/client.key
	I0814 17:24:31.273448   65068 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.key.4b2808ac
	I0814 17:24:31.273506   65068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.key
	I0814 17:24:31.273657   65068 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:24:31.273711   65068 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:24:31.273726   65068 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:24:31.273759   65068 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:24:31.273794   65068 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:24:31.273829   65068 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:24:31.273892   65068 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:24:31.274569   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:24:31.458116   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:24:31.658383   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:24:31.701381   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:24:31.768646   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0814 17:24:31.801428   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:24:31.835007   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:24:31.865914   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kubernetes-upgrade-422555/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0814 17:24:31.907843   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:24:31.952400   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:24:31.989875   65068 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:24:32.036232   65068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:24:32.059270   65068 ssh_runner.go:195] Run: openssl version
	I0814 17:24:32.064928   65068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:24:32.075613   65068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:24:32.080163   65068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:24:32.080264   65068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:24:32.085886   65068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:24:32.098477   65068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:24:32.112606   65068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:24:32.117787   65068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:24:32.117856   65068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:24:32.123425   65068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:24:32.132776   65068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:24:32.144303   65068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:24:32.148789   65068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:24:32.148843   65068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:24:32.154250   65068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:24:32.164284   65068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:24:32.169109   65068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:24:32.174652   65068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:24:32.180428   65068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:24:32.185845   65068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:24:32.191595   65068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:24:32.198759   65068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
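Each openssl x509 ... -checkend 86400 call above asks whether a certificate expires within the next 24 hours; in this run they all pass, so the existing certificates are kept. An equivalent standalone check in Go (not how minikube implements it; the certificate path is only an example) looks like:

// Sketch: report whether a PEM certificate expires within a given duration,
// mirroring `openssl x509 -checkend 86400`. Path is an example.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h, would regenerate")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}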
	I0814 17:24:32.204465   65068 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-422555 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-422555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.9 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:24:32.204558   65068 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:24:32.204627   65068 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:24:32.265612   65068 cri.go:89] found id: "a4076491f219c0fc4dd01846b6caeb08840b8fde8cfe5b44839f4ed689f39110"
	I0814 17:24:32.265642   65068 cri.go:89] found id: "4b1276f3768dac99871a420beaac9a7b2b64d315b56a983a6741d1ce57a32b2c"
	I0814 17:24:32.265648   65068 cri.go:89] found id: "8164bc253bb765f0d14e46280cdd4a8693fdb105b791da276a408dc149243129"
	I0814 17:24:32.265654   65068 cri.go:89] found id: "db14f831bfbd10337e0dae3c6f3168a7bce0e557ffff0b6667458e25399f11e5"
	I0814 17:24:32.265673   65068 cri.go:89] found id: "9fc1e0111145bb3a1ca923c6099ca33acd5d158a4261fecbfb3c04fd2027c934"
	I0814 17:24:32.265678   65068 cri.go:89] found id: "cbbc4dc7ede12090c04bd4b98f4d60b065ffca0b5358551fce8b05dea2bcc3c8"
	I0814 17:24:32.265682   65068 cri.go:89] found id: "195b10d99ccf24e57618460daffb3e2215d2c8b0052b9a05e1f42123c6be3aaf"
	I0814 17:24:32.265686   65068 cri.go:89] found id: "98fa4ffbe11d05bdb5744acdf6903b7f5044bc5b3e14ce6a1b2cb03f5da89674"
	I0814 17:24:32.265690   65068 cri.go:89] found id: ""
	I0814 17:24:32.265752   65068 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.517586125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723656281517558292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a489341-d7e3-4357-bfac-60ea06b47730 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.518285854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7110d28a-4764-4869-a6a0-d79f902270be name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.518359277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7110d28a-4764-4869-a6a0-d79f902270be name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.518856369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:401bd5f1b1356cc0b688c669ff66d56ddbc369b5636c93af2333582f2fef8a01,PodSandboxId:00bc95b5148895e5f66b4d99d4bfa7e7126e6e20ce8b35f4ffa37f8d130c6f59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723656278067965225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vc4k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa3f0a7-e14c-4411-8612-954fd1789ece,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cf7bf76c7e5ac19fff2aedfd71bf86edf05f53da9a6d48492376fe21b80306,PodSandboxId:5ae4edb479468c393da7a25831c005811eb8ecdc0fd5c77a189b3021522a4deb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723656278067204762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55crq,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: de6d8627-c52a-4a60-8bd3-a927defcb614,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1015ce5b7da5c92f4f06c90311a1b50cb6fd8751ecd8ee72cb18cf8d74bb333,PodSandboxId:2d47771750025326458ca26cede0c6cfbf69d4ce7a56e3d68f859115559ecdb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723656278054196539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qldv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cce40e799a3ab332c6716d49c6c9ffe0a58adb146c423390d4bd5a94e4671e5,PodSandboxId:b5b054d35e8f44d4e1cc1313cbcc863304ad3db4e1fd2390a2e06dbc97a739e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
656278044107345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf4942d-2be2-45c0-93a3-0c70c6cf2ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fcf2887a30abd5bcd196ceada9f61915297d583a25bce87c485ac8dd635a58,PodSandboxId:b139e3c04d3a56f12d50a546ffaca4794166bb2d01edab418e29df57818f35b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723656274256288030,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a990392f98860246d1b6c77dbd33ef2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7303697bc9e41dcfdbcba39cde662be1a0fce307142b53674f08cc6c8f2c7366,PodSandboxId:05738a08a2e68d5e1b56bf95859bf9cebd90ae43900baab2445abfbefe620aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723656274287596407,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 309ab58268bc4b632d2e743042f37c26,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2693d4a0cf2b23a7a96b7f0ef6df79a74b757d103342a39a88b96306b19ee045,PodSandboxId:c85703101c0742c50703cd38dcf108cd25c021c7147c7f266547e858e232881b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723656274299351793,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f8c16b8a69fcc8031dc901991020e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1951e6919c826a1df166b41e418a17c4612df4beba486b97203984dd6c9b03fa,PodSandboxId:f170996055ff83bf25d67846ab6a082035f2433a5c8099c046183da414234b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723656274269090228,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44dbdc56ef83022ed83999bc2e616148,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4076491f219c0fc4dd01846b6caeb08840b8fde8cfe5b44839f4ed689f39110,PodSandboxId:156d92adcb92ccecbfa910d641b8539d88e924dab269be95eab01fd031a28b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723656269877064309,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vc4k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa3f0a7-e14c-4411-8612-954fd1789ece,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195b10d99ccf24e57618460daffb3e2215d2c8b0052b9a05e1f42123c6be3aaf,PodSandboxId:e72d1f717497bb60a8354effaeafb2a733f795ad412cc3493bea274eaddb14f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723656268496178603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qldv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8164bc253bb765f0d14e46280cdd4a8693fdb105b791da276a408dc149243129,PodSandboxId:e0fa1d3bcf36e86af2eb0f70b288c19f1658c7dead9fa431f4ac28adf777ed1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723656268601513491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf4942d-2be2-45c0-93a3-0c70c6cf2ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b1276f3768dac99871a420beaac9a7b2b64d315b56a983a6741d1ce57a32b2c,PodSandboxId:0509749c82661cd162d174c88a87089eb1cd7517364287505b47a5c205dc4790,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb0
1a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723656269491894805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6d8627-c52a-4a60-8bd3-a927defcb614,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc1e0111145bb3a1ca923c6099ca33acd5d158a4261fecbfb3c04fd2027c934,PodSandboxId:fda3e722abfcaa4ea251084ee9a1d842657a93b6f3a1b0ae4387d610f7e25304,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723656268593112391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 309ab58268bc4b632d2e743042f37c26,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db14f831bfbd10337e0dae3c6f3168a7bce0e557ffff0b6667458e25399f11e5,PodSandboxId:ffe735f12da1260b5daba45b363b923af9c0d3e790865cd9e843b68951c05638,Metadata:&ContainerMetadata{Name:etc
d,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723656268598446281,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a990392f98860246d1b6c77dbd33ef2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbc4dc7ede12090c04bd4b98f4d60b065ffca0b5358551fce8b05dea2bcc3c8,PodSandboxId:189afda38c676ffdbe00700c22c48adcdf945d82c25c802ffa0ec55730d50436,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Image
Spec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723656268525344251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44dbdc56ef83022ed83999bc2e616148,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98fa4ffbe11d05bdb5744acdf6903b7f5044bc5b3e14ce6a1b2cb03f5da89674,PodSandboxId:ecce279e7d4724613906b2e7fe31cb68f63fb00f6b5b82c62e916f01c2c5bc74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723656268346483672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f8c16b8a69fcc8031dc901991020e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7110d28a-4764-4869-a6a0-d79f902270be name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.563031261Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f46fdde-fd1d-4752-9c46-04b1faa69184 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.563127678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f46fdde-fd1d-4752-9c46-04b1faa69184 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.564921017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca4287cb-653c-4c47-8441-4e0646640f4c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.565660633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723656281565616395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca4287cb-653c-4c47-8441-4e0646640f4c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.566405043Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5af05367-b72c-471a-8610-e4ebd1699dbb name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.566459590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5af05367-b72c-471a-8610-e4ebd1699dbb name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.566896513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:401bd5f1b1356cc0b688c669ff66d56ddbc369b5636c93af2333582f2fef8a01,PodSandboxId:00bc95b5148895e5f66b4d99d4bfa7e7126e6e20ce8b35f4ffa37f8d130c6f59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723656278067965225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vc4k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa3f0a7-e14c-4411-8612-954fd1789ece,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cf7bf76c7e5ac19fff2aedfd71bf86edf05f53da9a6d48492376fe21b80306,PodSandboxId:5ae4edb479468c393da7a25831c005811eb8ecdc0fd5c77a189b3021522a4deb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723656278067204762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55crq,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: de6d8627-c52a-4a60-8bd3-a927defcb614,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1015ce5b7da5c92f4f06c90311a1b50cb6fd8751ecd8ee72cb18cf8d74bb333,PodSandboxId:2d47771750025326458ca26cede0c6cfbf69d4ce7a56e3d68f859115559ecdb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723656278054196539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qldv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cce40e799a3ab332c6716d49c6c9ffe0a58adb146c423390d4bd5a94e4671e5,PodSandboxId:b5b054d35e8f44d4e1cc1313cbcc863304ad3db4e1fd2390a2e06dbc97a739e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
656278044107345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf4942d-2be2-45c0-93a3-0c70c6cf2ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fcf2887a30abd5bcd196ceada9f61915297d583a25bce87c485ac8dd635a58,PodSandboxId:b139e3c04d3a56f12d50a546ffaca4794166bb2d01edab418e29df57818f35b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723656274256288030,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a990392f98860246d1b6c77dbd33ef2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7303697bc9e41dcfdbcba39cde662be1a0fce307142b53674f08cc6c8f2c7366,PodSandboxId:05738a08a2e68d5e1b56bf95859bf9cebd90ae43900baab2445abfbefe620aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723656274287596407,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 309ab58268bc4b632d2e743042f37c26,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2693d4a0cf2b23a7a96b7f0ef6df79a74b757d103342a39a88b96306b19ee045,PodSandboxId:c85703101c0742c50703cd38dcf108cd25c021c7147c7f266547e858e232881b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723656274299351793,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f8c16b8a69fcc8031dc901991020e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1951e6919c826a1df166b41e418a17c4612df4beba486b97203984dd6c9b03fa,PodSandboxId:f170996055ff83bf25d67846ab6a082035f2433a5c8099c046183da414234b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723656274269090228,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44dbdc56ef83022ed83999bc2e616148,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4076491f219c0fc4dd01846b6caeb08840b8fde8cfe5b44839f4ed689f39110,PodSandboxId:156d92adcb92ccecbfa910d641b8539d88e924dab269be95eab01fd031a28b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723656269877064309,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vc4k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa3f0a7-e14c-4411-8612-954fd1789ece,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195b10d99ccf24e57618460daffb3e2215d2c8b0052b9a05e1f42123c6be3aaf,PodSandboxId:e72d1f717497bb60a8354effaeafb2a733f795ad412cc3493bea274eaddb14f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723656268496178603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qldv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8164bc253bb765f0d14e46280cdd4a8693fdb105b791da276a408dc149243129,PodSandboxId:e0fa1d3bcf36e86af2eb0f70b288c19f1658c7dead9fa431f4ac28adf777ed1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723656268601513491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf4942d-2be2-45c0-93a3-0c70c6cf2ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b1276f3768dac99871a420beaac9a7b2b64d315b56a983a6741d1ce57a32b2c,PodSandboxId:0509749c82661cd162d174c88a87089eb1cd7517364287505b47a5c205dc4790,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb0
1a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723656269491894805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6d8627-c52a-4a60-8bd3-a927defcb614,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc1e0111145bb3a1ca923c6099ca33acd5d158a4261fecbfb3c04fd2027c934,PodSandboxId:fda3e722abfcaa4ea251084ee9a1d842657a93b6f3a1b0ae4387d610f7e25304,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723656268593112391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 309ab58268bc4b632d2e743042f37c26,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db14f831bfbd10337e0dae3c6f3168a7bce0e557ffff0b6667458e25399f11e5,PodSandboxId:ffe735f12da1260b5daba45b363b923af9c0d3e790865cd9e843b68951c05638,Metadata:&ContainerMetadata{Name:etc
d,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723656268598446281,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a990392f98860246d1b6c77dbd33ef2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbc4dc7ede12090c04bd4b98f4d60b065ffca0b5358551fce8b05dea2bcc3c8,PodSandboxId:189afda38c676ffdbe00700c22c48adcdf945d82c25c802ffa0ec55730d50436,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Image
Spec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723656268525344251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44dbdc56ef83022ed83999bc2e616148,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98fa4ffbe11d05bdb5744acdf6903b7f5044bc5b3e14ce6a1b2cb03f5da89674,PodSandboxId:ecce279e7d4724613906b2e7fe31cb68f63fb00f6b5b82c62e916f01c2c5bc74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723656268346483672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f8c16b8a69fcc8031dc901991020e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5af05367-b72c-471a-8610-e4ebd1699dbb name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.615681931Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a03e08c-3798-4c9b-814e-67016d70fed4 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.615871170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a03e08c-3798-4c9b-814e-67016d70fed4 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.617423450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f392608-40a1-4a70-8fda-636b6de65bee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.618003093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723656281617978541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f392608-40a1-4a70-8fda-636b6de65bee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.618610270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78f86d7d-b6b4-4845-b44c-e6fa67ca8669 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.618681248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78f86d7d-b6b4-4845-b44c-e6fa67ca8669 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.619110930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:401bd5f1b1356cc0b688c669ff66d56ddbc369b5636c93af2333582f2fef8a01,PodSandboxId:00bc95b5148895e5f66b4d99d4bfa7e7126e6e20ce8b35f4ffa37f8d130c6f59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723656278067965225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vc4k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa3f0a7-e14c-4411-8612-954fd1789ece,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cf7bf76c7e5ac19fff2aedfd71bf86edf05f53da9a6d48492376fe21b80306,PodSandboxId:5ae4edb479468c393da7a25831c005811eb8ecdc0fd5c77a189b3021522a4deb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723656278067204762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55crq,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: de6d8627-c52a-4a60-8bd3-a927defcb614,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1015ce5b7da5c92f4f06c90311a1b50cb6fd8751ecd8ee72cb18cf8d74bb333,PodSandboxId:2d47771750025326458ca26cede0c6cfbf69d4ce7a56e3d68f859115559ecdb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723656278054196539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qldv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cce40e799a3ab332c6716d49c6c9ffe0a58adb146c423390d4bd5a94e4671e5,PodSandboxId:b5b054d35e8f44d4e1cc1313cbcc863304ad3db4e1fd2390a2e06dbc97a739e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
656278044107345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf4942d-2be2-45c0-93a3-0c70c6cf2ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fcf2887a30abd5bcd196ceada9f61915297d583a25bce87c485ac8dd635a58,PodSandboxId:b139e3c04d3a56f12d50a546ffaca4794166bb2d01edab418e29df57818f35b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723656274256288030,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a990392f98860246d1b6c77dbd33ef2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7303697bc9e41dcfdbcba39cde662be1a0fce307142b53674f08cc6c8f2c7366,PodSandboxId:05738a08a2e68d5e1b56bf95859bf9cebd90ae43900baab2445abfbefe620aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723656274287596407,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 309ab58268bc4b632d2e743042f37c26,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2693d4a0cf2b23a7a96b7f0ef6df79a74b757d103342a39a88b96306b19ee045,PodSandboxId:c85703101c0742c50703cd38dcf108cd25c021c7147c7f266547e858e232881b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723656274299351793,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f8c16b8a69fcc8031dc901991020e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1951e6919c826a1df166b41e418a17c4612df4beba486b97203984dd6c9b03fa,PodSandboxId:f170996055ff83bf25d67846ab6a082035f2433a5c8099c046183da414234b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723656274269090228,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44dbdc56ef83022ed83999bc2e616148,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4076491f219c0fc4dd01846b6caeb08840b8fde8cfe5b44839f4ed689f39110,PodSandboxId:156d92adcb92ccecbfa910d641b8539d88e924dab269be95eab01fd031a28b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723656269877064309,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vc4k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa3f0a7-e14c-4411-8612-954fd1789ece,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195b10d99ccf24e57618460daffb3e2215d2c8b0052b9a05e1f42123c6be3aaf,PodSandboxId:e72d1f717497bb60a8354effaeafb2a733f795ad412cc3493bea274eaddb14f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723656268496178603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qldv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8164bc253bb765f0d14e46280cdd4a8693fdb105b791da276a408dc149243129,PodSandboxId:e0fa1d3bcf36e86af2eb0f70b288c19f1658c7dead9fa431f4ac28adf777ed1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723656268601513491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf4942d-2be2-45c0-93a3-0c70c6cf2ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b1276f3768dac99871a420beaac9a7b2b64d315b56a983a6741d1ce57a32b2c,PodSandboxId:0509749c82661cd162d174c88a87089eb1cd7517364287505b47a5c205dc4790,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb0
1a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723656269491894805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6d8627-c52a-4a60-8bd3-a927defcb614,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc1e0111145bb3a1ca923c6099ca33acd5d158a4261fecbfb3c04fd2027c934,PodSandboxId:fda3e722abfcaa4ea251084ee9a1d842657a93b6f3a1b0ae4387d610f7e25304,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723656268593112391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 309ab58268bc4b632d2e743042f37c26,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db14f831bfbd10337e0dae3c6f3168a7bce0e557ffff0b6667458e25399f11e5,PodSandboxId:ffe735f12da1260b5daba45b363b923af9c0d3e790865cd9e843b68951c05638,Metadata:&ContainerMetadata{Name:etc
d,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723656268598446281,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a990392f98860246d1b6c77dbd33ef2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbc4dc7ede12090c04bd4b98f4d60b065ffca0b5358551fce8b05dea2bcc3c8,PodSandboxId:189afda38c676ffdbe00700c22c48adcdf945d82c25c802ffa0ec55730d50436,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Image
Spec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723656268525344251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44dbdc56ef83022ed83999bc2e616148,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98fa4ffbe11d05bdb5744acdf6903b7f5044bc5b3e14ce6a1b2cb03f5da89674,PodSandboxId:ecce279e7d4724613906b2e7fe31cb68f63fb00f6b5b82c62e916f01c2c5bc74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723656268346483672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f8c16b8a69fcc8031dc901991020e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78f86d7d-b6b4-4845-b44c-e6fa67ca8669 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.653544816Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=741a9317-86f0-4d9c-bec1-adba5434f47a name=/runtime.v1.RuntimeService/Version
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.653619602Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=741a9317-86f0-4d9c-bec1-adba5434f47a name=/runtime.v1.RuntimeService/Version
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.654616319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad885e59-5d14-49cb-b252-6635a4409817 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.655034287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723656281655009346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad885e59-5d14-49cb-b252-6635a4409817 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.655591899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a6ac505-9c4f-4069-8c50-57bb9e9fb5da name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.655649085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a6ac505-9c4f-4069-8c50-57bb9e9fb5da name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:24:41 kubernetes-upgrade-422555 crio[3002]: time="2024-08-14 17:24:41.657051225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:401bd5f1b1356cc0b688c669ff66d56ddbc369b5636c93af2333582f2fef8a01,PodSandboxId:00bc95b5148895e5f66b4d99d4bfa7e7126e6e20ce8b35f4ffa37f8d130c6f59,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723656278067965225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vc4k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa3f0a7-e14c-4411-8612-954fd1789ece,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5cf7bf76c7e5ac19fff2aedfd71bf86edf05f53da9a6d48492376fe21b80306,PodSandboxId:5ae4edb479468c393da7a25831c005811eb8ecdc0fd5c77a189b3021522a4deb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723656278067204762,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55crq,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: de6d8627-c52a-4a60-8bd3-a927defcb614,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1015ce5b7da5c92f4f06c90311a1b50cb6fd8751ecd8ee72cb18cf8d74bb333,PodSandboxId:2d47771750025326458ca26cede0c6cfbf69d4ce7a56e3d68f859115559ecdb1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAIN
ER_RUNNING,CreatedAt:1723656278054196539,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qldv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cce40e799a3ab332c6716d49c6c9ffe0a58adb146c423390d4bd5a94e4671e5,PodSandboxId:b5b054d35e8f44d4e1cc1313cbcc863304ad3db4e1fd2390a2e06dbc97a739e6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723
656278044107345,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf4942d-2be2-45c0-93a3-0c70c6cf2ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4fcf2887a30abd5bcd196ceada9f61915297d583a25bce87c485ac8dd635a58,PodSandboxId:b139e3c04d3a56f12d50a546ffaca4794166bb2d01edab418e29df57818f35b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723656274256288030,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a990392f98860246d1b6c77dbd33ef2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7303697bc9e41dcfdbcba39cde662be1a0fce307142b53674f08cc6c8f2c7366,PodSandboxId:05738a08a2e68d5e1b56bf95859bf9cebd90ae43900baab2445abfbefe620aff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723656274287596407,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 309ab58268bc4b632d2e743042f37c26,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2693d4a0cf2b23a7a96b7f0ef6df79a74b757d103342a39a88b96306b19ee045,PodSandboxId:c85703101c0742c50703cd38dcf108cd25c021c7147c7f266547e858e232881b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723656274299351793,Labels:map[string]string{io.kube
rnetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f8c16b8a69fcc8031dc901991020e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1951e6919c826a1df166b41e418a17c4612df4beba486b97203984dd6c9b03fa,PodSandboxId:f170996055ff83bf25d67846ab6a082035f2433a5c8099c046183da414234b80,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723656274269090228,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44dbdc56ef83022ed83999bc2e616148,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4076491f219c0fc4dd01846b6caeb08840b8fde8cfe5b44839f4ed689f39110,PodSandboxId:156d92adcb92ccecbfa910d641b8539d88e924dab269be95eab01fd031a28b68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723656269877064309,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vc4k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fa3f0a7-e14c-4411-8612-954fd1789ece,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195b10d99ccf24e57618460daffb3e2215d2c8b0052b9a05e1f42123c6be3aaf,PodSandboxId:e72d1f717497bb60a8354effaeafb2a733f795ad412cc3493bea274eaddb14f6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723656268496178603,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qldv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8164bc253bb765f0d14e46280cdd4a8693fdb105b791da276a408dc149243129,PodSandboxId:e0fa1d3bcf36e86af2eb0f70b288c19f1658c7dead9fa431f4ac28adf777ed1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723656268601513491,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf4942d-2be2-45c0-93a3-0c70c6cf2ba4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b1276f3768dac99871a420beaac9a7b2b64d315b56a983a6741d1ce57a32b2c,PodSandboxId:0509749c82661cd162d174c88a87089eb1cd7517364287505b47a5c205dc4790,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb0
1a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723656269491894805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-55crq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6d8627-c52a-4a60-8bd3-a927defcb614,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9fc1e0111145bb3a1ca923c6099ca33acd5d158a4261fecbfb3c04fd2027c934,PodSandboxId:fda3e722abfcaa4ea251084ee9a1d842657a93b6f3a1b0ae4387d610f7e25304,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723656268593112391,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 309ab58268bc4b632d2e743042f37c26,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db14f831bfbd10337e0dae3c6f3168a7bce0e557ffff0b6667458e25399f11e5,PodSandboxId:ffe735f12da1260b5daba45b363b923af9c0d3e790865cd9e843b68951c05638,Metadata:&ContainerMetadata{Name:etc
d,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723656268598446281,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a990392f98860246d1b6c77dbd33ef2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbbc4dc7ede12090c04bd4b98f4d60b065ffca0b5358551fce8b05dea2bcc3c8,PodSandboxId:189afda38c676ffdbe00700c22c48adcdf945d82c25c802ffa0ec55730d50436,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Image
Spec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723656268525344251,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44dbdc56ef83022ed83999bc2e616148,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98fa4ffbe11d05bdb5744acdf6903b7f5044bc5b3e14ce6a1b2cb03f5da89674,PodSandboxId:ecce279e7d4724613906b2e7fe31cb68f63fb00f6b5b82c62e916f01c2c5bc74,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&Im
ageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723656268346483672,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-422555,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f8c16b8a69fcc8031dc901991020e6,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a6ac505-9c4f-4069-8c50-57bb9e9fb5da name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	401bd5f1b1356       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   00bc95b514889       coredns-6f6b679f8f-vc4k7
	c5cf7bf76c7e5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   5ae4edb479468       coredns-6f6b679f8f-55crq
	d1015ce5b7da5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   3 seconds ago       Running             kube-proxy                2                   2d47771750025       kube-proxy-5qldv
	4cce40e799a3a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   b5b054d35e8f4       storage-provisioner
	2693d4a0cf2b2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   7 seconds ago       Running             kube-controller-manager   2                   c85703101c074       kube-controller-manager-kubernetes-upgrade-422555
	7303697bc9e41       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   7 seconds ago       Running             kube-scheduler            2                   05738a08a2e68       kube-scheduler-kubernetes-upgrade-422555
	1951e6919c826       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago       Running             kube-apiserver            2                   f170996055ff8       kube-apiserver-kubernetes-upgrade-422555
	b4fcf2887a30a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   b139e3c04d3a5       etcd-kubernetes-upgrade-422555
	a4076491f219c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   11 seconds ago      Exited              coredns                   1                   156d92adcb92c       coredns-6f6b679f8f-vc4k7
	4b1276f3768da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   12 seconds ago      Exited              coredns                   1                   0509749c82661       coredns-6f6b679f8f-55crq
	8164bc253bb76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Exited              storage-provisioner       1                   e0fa1d3bcf36e       storage-provisioner
	db14f831bfbd1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   13 seconds ago      Exited              etcd                      1                   ffe735f12da12       etcd-kubernetes-upgrade-422555
	9fc1e0111145b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   13 seconds ago      Exited              kube-scheduler            1                   fda3e722abfca       kube-scheduler-kubernetes-upgrade-422555
	cbbc4dc7ede12       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   13 seconds ago      Exited              kube-apiserver            1                   189afda38c676       kube-apiserver-kubernetes-upgrade-422555
	195b10d99ccf2       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   13 seconds ago      Exited              kube-proxy                1                   e72d1f717497b       kube-proxy-5qldv
	98fa4ffbe11d0       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   13 seconds ago      Exited              kube-controller-manager   1                   ecce279e7d472       kube-controller-manager-kubernetes-upgrade-422555
	
	
	==> coredns [401bd5f1b1356cc0b688c669ff66d56ddbc369b5636c93af2333582f2fef8a01] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [4b1276f3768dac99871a420beaac9a7b2b64d315b56a983a6741d1ce57a32b2c] <==
	
	
	==> coredns [a4076491f219c0fc4dd01846b6caeb08840b8fde8cfe5b44839f4ed689f39110] <==
	
	
	==> coredns [c5cf7bf76c7e5ac19fff2aedfd71bf86edf05f53da9a6d48492376fe21b80306] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-422555
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-422555
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:23:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-422555
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:24:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:24:37 +0000   Wed, 14 Aug 2024 17:23:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:24:37 +0000   Wed, 14 Aug 2024 17:23:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:24:37 +0000   Wed, 14 Aug 2024 17:23:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:24:37 +0000   Wed, 14 Aug 2024 17:23:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.9
	  Hostname:    kubernetes-upgrade-422555
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 84aa5fa1fcfa4310baa525edd4545505
	  System UUID:                84aa5fa1-fcfa-4310-baa5-25edd4545505
	  Boot ID:                    59249962-55e6-4480-b032-b64d33344aed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-55crq                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     37s
	  kube-system                 coredns-6f6b679f8f-vc4k7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     37s
	  kube-system                 etcd-kubernetes-upgrade-422555                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         40s
	  kube-system                 kube-apiserver-kubernetes-upgrade-422555             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-422555    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-5qldv                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-kubernetes-upgrade-422555             100m (5%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 36s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 50s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  48s (x8 over 50s)  kubelet          Node kubernetes-upgrade-422555 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 50s)  kubelet          Node kubernetes-upgrade-422555 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 50s)  kubelet          Node kubernetes-upgrade-422555 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node kubernetes-upgrade-422555 event: Registered Node kubernetes-upgrade-422555 in Controller
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-422555 event: Registered Node kubernetes-upgrade-422555 in Controller
	
	
	==> dmesg <==
	[  +1.563947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.521551] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.068566] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062377] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.165958] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.140892] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.275018] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +4.226216] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +1.931201] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +0.063653] kauditd_printk_skb: 158 callbacks suppressed
	[Aug14 17:24] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.084584] kauditd_printk_skb: 69 callbacks suppressed
	[ +24.193394] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.482757] systemd-fstab-generator[2423]: Ignoring "noauto" option for root device
	[  +0.374616] systemd-fstab-generator[2615]: Ignoring "noauto" option for root device
	[  +0.434251] systemd-fstab-generator[2756]: Ignoring "noauto" option for root device
	[  +0.236697] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.424466] systemd-fstab-generator[2924]: Ignoring "noauto" option for root device
	[  +1.242761] systemd-fstab-generator[3325]: Ignoring "noauto" option for root device
	[  +2.331037] systemd-fstab-generator[3920]: Ignoring "noauto" option for root device
	[  +0.126244] kauditd_printk_skb: 302 callbacks suppressed
	[  +5.205136] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.617943] systemd-fstab-generator[4462]: Ignoring "noauto" option for root device
	
	
	==> etcd [b4fcf2887a30abd5bcd196ceada9f61915297d583a25bce87c485ac8dd635a58] <==
	{"level":"info","ts":"2024-08-14T17:24:34.647404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc switched to configuration voters=(7974804004021076700)"}
	{"level":"info","ts":"2024-08-14T17:24:34.648184Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8190a51c6b3ce3b6","local-member-id":"6eac31fd450a02dc","added-peer-id":"6eac31fd450a02dc","added-peer-peer-urls":["https://192.168.72.9:2380"]}
	{"level":"info","ts":"2024-08-14T17:24:34.648300Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8190a51c6b3ce3b6","local-member-id":"6eac31fd450a02dc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:24:34.648341Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:24:34.657604Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T17:24:34.657808Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.9:2380"}
	{"level":"info","ts":"2024-08-14T17:24:34.658047Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.9:2380"}
	{"level":"info","ts":"2024-08-14T17:24:34.659456Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6eac31fd450a02dc","initial-advertise-peer-urls":["https://192.168.72.9:2380"],"listen-peer-urls":["https://192.168.72.9:2380"],"advertise-client-urls":["https://192.168.72.9:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.9:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T17:24:34.659504Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T17:24:35.724730Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-14T17:24:35.724882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-14T17:24:35.724951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc received MsgPreVoteResp from 6eac31fd450a02dc at term 2"}
	{"level":"info","ts":"2024-08-14T17:24:35.724982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc became candidate at term 3"}
	{"level":"info","ts":"2024-08-14T17:24:35.725009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc received MsgVoteResp from 6eac31fd450a02dc at term 3"}
	{"level":"info","ts":"2024-08-14T17:24:35.725036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc became leader at term 3"}
	{"level":"info","ts":"2024-08-14T17:24:35.725061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6eac31fd450a02dc elected leader 6eac31fd450a02dc at term 3"}
	{"level":"info","ts":"2024-08-14T17:24:35.730066Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6eac31fd450a02dc","local-member-attributes":"{Name:kubernetes-upgrade-422555 ClientURLs:[https://192.168.72.9:2379]}","request-path":"/0/members/6eac31fd450a02dc/attributes","cluster-id":"8190a51c6b3ce3b6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T17:24:35.730427Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:24:35.730917Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:24:35.731734Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:24:35.732701Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T17:24:35.732794Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T17:24:35.732833Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T17:24:35.733493Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:24:35.734319Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.9:2379"}
	
	
	==> etcd [db14f831bfbd10337e0dae3c6f3168a7bce0e557ffff0b6667458e25399f11e5] <==
	{"level":"info","ts":"2024-08-14T17:24:29.587179Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-14T17:24:29.683153Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"8190a51c6b3ce3b6","local-member-id":"6eac31fd450a02dc","commit-index":398}
	{"level":"info","ts":"2024-08-14T17:24:29.683330Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-14T17:24:29.683376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc became follower at term 2"}
	{"level":"info","ts":"2024-08-14T17:24:29.683400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6eac31fd450a02dc [peers: [], term: 2, commit: 398, applied: 0, lastindex: 398, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-14T17:24:29.698833Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-14T17:24:29.741641Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":387}
	{"level":"info","ts":"2024-08-14T17:24:29.800029Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-14T17:24:29.804536Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6eac31fd450a02dc","timeout":"7s"}
	{"level":"info","ts":"2024-08-14T17:24:29.810099Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6eac31fd450a02dc"}
	{"level":"info","ts":"2024-08-14T17:24:29.810203Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6eac31fd450a02dc","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-14T17:24:29.811479Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:24:29.817517Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T17:24:29.815024Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-14T17:24:29.815499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6eac31fd450a02dc switched to configuration voters=(7974804004021076700)"}
	{"level":"info","ts":"2024-08-14T17:24:29.829896Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8190a51c6b3ce3b6","local-member-id":"6eac31fd450a02dc","added-peer-id":"6eac31fd450a02dc","added-peer-peer-urls":["https://192.168.72.9:2380"]}
	{"level":"info","ts":"2024-08-14T17:24:29.830155Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8190a51c6b3ce3b6","local-member-id":"6eac31fd450a02dc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:24:29.830200Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:24:29.834890Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.9:2380"}
	{"level":"info","ts":"2024-08-14T17:24:29.834915Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.9:2380"}
	{"level":"info","ts":"2024-08-14T17:24:29.835826Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6eac31fd450a02dc","initial-advertise-peer-urls":["https://192.168.72.9:2380"],"listen-peer-urls":["https://192.168.72.9:2380"],"advertise-client-urls":["https://192.168.72.9:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.9:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T17:24:29.835852Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T17:24:29.815578Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-14T17:24:29.835924Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-14T17:24:29.835935Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	
	
	==> kernel <==
	 17:24:42 up 1 min,  0 users,  load average: 2.18, 0.51, 0.17
	Linux kubernetes-upgrade-422555 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1951e6919c826a1df166b41e418a17c4612df4beba486b97203984dd6c9b03fa] <==
	I0814 17:24:37.306004       1 aggregator.go:171] initial CRD sync complete...
	I0814 17:24:37.306038       1 autoregister_controller.go:144] Starting autoregister controller
	I0814 17:24:37.306078       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0814 17:24:37.306114       1 cache.go:39] Caches are synced for autoregister controller
	I0814 17:24:37.316099       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0814 17:24:37.328011       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0814 17:24:37.328105       1 policy_source.go:224] refreshing policies
	I0814 17:24:37.377307       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0814 17:24:37.377664       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0814 17:24:37.378743       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0814 17:24:37.378809       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0814 17:24:37.379070       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0814 17:24:37.379498       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0814 17:24:37.381200       1 shared_informer.go:320] Caches are synced for configmaps
	I0814 17:24:37.386795       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0814 17:24:37.389160       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0814 17:24:37.390957       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0814 17:24:38.201559       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0814 17:24:38.323994       1 controller.go:615] quota admission added evaluator for: endpoints
	I0814 17:24:39.061635       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0814 17:24:39.086609       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0814 17:24:39.138875       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0814 17:24:39.210349       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0814 17:24:39.218541       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0814 17:24:40.750412       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [cbbc4dc7ede12090c04bd4b98f4d60b065ffca0b5358551fce8b05dea2bcc3c8] <==
	I0814 17:24:29.138707       1 options.go:228] external host was not specified, using 192.168.72.9
	I0814 17:24:29.155009       1 server.go:142] Version: v1.31.0
	I0814 17:24:29.178920       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-controller-manager [2693d4a0cf2b23a7a96b7f0ef6df79a74b757d103342a39a88b96306b19ee045] <==
	I0814 17:24:40.607045       1 shared_informer.go:320] Caches are synced for PV protection
	I0814 17:24:40.613833       1 shared_informer.go:320] Caches are synced for node
	I0814 17:24:40.613931       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0814 17:24:40.614006       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0814 17:24:40.614014       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0814 17:24:40.614022       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0814 17:24:40.614095       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-422555"
	I0814 17:24:40.623119       1 shared_informer.go:320] Caches are synced for expand
	I0814 17:24:40.623283       1 shared_informer.go:320] Caches are synced for persistent volume
	I0814 17:24:40.626926       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0814 17:24:40.630256       1 shared_informer.go:320] Caches are synced for ephemeral
	I0814 17:24:40.644108       1 shared_informer.go:320] Caches are synced for endpoint
	I0814 17:24:40.647683       1 shared_informer.go:320] Caches are synced for attach detach
	I0814 17:24:40.648398       1 shared_informer.go:320] Caches are synced for deployment
	I0814 17:24:40.650024       1 shared_informer.go:320] Caches are synced for cronjob
	I0814 17:24:40.652427       1 shared_informer.go:320] Caches are synced for namespace
	I0814 17:24:40.703066       1 shared_informer.go:320] Caches are synced for stateful set
	I0814 17:24:40.748967       1 shared_informer.go:320] Caches are synced for daemon sets
	I0814 17:24:40.805432       1 shared_informer.go:320] Caches are synced for resource quota
	I0814 17:24:40.830303       1 shared_informer.go:320] Caches are synced for resource quota
	I0814 17:24:41.020566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="208.029366ms"
	I0814 17:24:41.022286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="76.308µs"
	I0814 17:24:41.254129       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 17:24:41.269440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0814 17:24:41.269470       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [98fa4ffbe11d05bdb5744acdf6903b7f5044bc5b3e14ce6a1b2cb03f5da89674] <==
	
	
	==> kube-proxy [195b10d99ccf24e57618460daffb3e2215d2c8b0052b9a05e1f42123c6be3aaf] <==
	
	
	==> kube-proxy [d1015ce5b7da5c92f4f06c90311a1b50cb6fd8751ecd8ee72cb18cf8d74bb333] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:24:38.490422       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:24:38.502384       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.9"]
	E0814 17:24:38.502535       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:24:38.569335       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:24:38.569395       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:24:38.569432       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:24:38.573517       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:24:38.573997       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:24:38.574677       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:24:38.576491       1 config.go:197] "Starting service config controller"
	I0814 17:24:38.576925       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:24:38.577041       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:24:38.577065       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:24:38.577611       1 config.go:326] "Starting node config controller"
	I0814 17:24:38.577655       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:24:38.677283       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 17:24:38.677381       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:24:38.677741       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7303697bc9e41dcfdbcba39cde662be1a0fce307142b53674f08cc6c8f2c7366] <==
	I0814 17:24:35.358617       1 serving.go:386] Generated self-signed cert in-memory
	W0814 17:24:37.220162       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 17:24:37.220214       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 17:24:37.220231       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 17:24:37.220242       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 17:24:37.304414       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 17:24:37.304450       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:24:37.308035       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 17:24:37.308063       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 17:24:37.308097       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 17:24:37.308452       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 17:24:37.409184       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9fc1e0111145bb3a1ca923c6099ca33acd5d158a4261fecbfb3c04fd2027c934] <==
	
	
	==> kubelet <==
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:34.235973    3926 scope.go:117] "RemoveContainer" containerID="98fa4ffbe11d05bdb5744acdf6903b7f5044bc5b3e14ce6a1b2cb03f5da89674"
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:34.238967    3926 scope.go:117] "RemoveContainer" containerID="9fc1e0111145bb3a1ca923c6099ca33acd5d158a4261fecbfb3c04fd2027c934"
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: E0814 17:24:34.316591    3926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-422555?timeout=10s\": dial tcp 192.168.72.9:8443: connect: connection refused" interval="800ms"
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: W0814 17:24:34.545048    3926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.72.9:8443: connect: connection refused
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: E0814 17:24:34.545134    3926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.72.9:8443: connect: connection refused" logger="UnhandledError"
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:34.576726    3926 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-422555"
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: E0814 17:24:34.578129    3926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.9:8443: connect: connection refused" node="kubernetes-upgrade-422555"
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: W0814 17:24:34.591619    3926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.9:8443: connect: connection refused
	Aug 14 17:24:34 kubernetes-upgrade-422555 kubelet[3926]: E0814 17:24:34.591730    3926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.72.9:8443: connect: connection refused" logger="UnhandledError"
	Aug 14 17:24:35 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:35.379910    3926 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-422555"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.429276    3926 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-422555"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.429492    3926 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-422555"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.429576    3926 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.432056    3926 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.686452    3926 apiserver.go:52] "Watching apiserver"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.702159    3926 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.761238    3926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9-lib-modules\") pod \"kube-proxy-5qldv\" (UID: \"8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9\") " pod="kube-system/kube-proxy-5qldv"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.761414    3926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/faf4942d-2be2-45c0-93a3-0c70c6cf2ba4-tmp\") pod \"storage-provisioner\" (UID: \"faf4942d-2be2-45c0-93a3-0c70c6cf2ba4\") " pod="kube-system/storage-provisioner"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.761470    3926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9-xtables-lock\") pod \"kube-proxy-5qldv\" (UID: \"8537e4ae-f68a-49f0-bb3d-8acd56fe6eb9\") " pod="kube-system/kube-proxy-5qldv"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.991492    3926 scope.go:117] "RemoveContainer" containerID="8164bc253bb765f0d14e46280cdd4a8693fdb105b791da276a408dc149243129"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.991982    3926 scope.go:117] "RemoveContainer" containerID="4b1276f3768dac99871a420beaac9a7b2b64d315b56a983a6741d1ce57a32b2c"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.992282    3926 scope.go:117] "RemoveContainer" containerID="195b10d99ccf24e57618460daffb3e2215d2c8b0052b9a05e1f42123c6be3aaf"
	Aug 14 17:24:37 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:37.992490    3926 scope.go:117] "RemoveContainer" containerID="a4076491f219c0fc4dd01846b6caeb08840b8fde8cfe5b44839f4ed689f39110"
	Aug 14 17:24:38 kubernetes-upgrade-422555 kubelet[3926]: E0814 17:24:38.083488    3926 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-422555\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-422555"
	Aug 14 17:24:40 kubernetes-upgrade-422555 kubelet[3926]: I0814 17:24:40.785528    3926 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [4cce40e799a3ab332c6716d49c6c9ffe0a58adb146c423390d4bd5a94e4671e5] <==
	I0814 17:24:38.274393       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 17:24:38.305660       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 17:24:38.305711       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 17:24:38.337194       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 17:24:38.338820       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b4f9ba67-07db-4e87-a0ab-975b3d5f5b77", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-422555_cf5317a9-f289-4dcb-842d-b917c8e57b28 became leader
	I0814 17:24:38.339140       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-422555_cf5317a9-f289-4dcb-842d-b917c8e57b28!
	I0814 17:24:38.441878       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-422555_cf5317a9-f289-4dcb-842d-b917c8e57b28!
	
	
	==> storage-provisioner [8164bc253bb765f0d14e46280cdd4a8693fdb105b791da276a408dc149243129] <==
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:24:41.106041   66421 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19446-13977/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-422555 -n kubernetes-upgrade-422555
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-422555 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-422555" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-422555
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-422555: (1.570852813s)
--- FAIL: TestKubernetesUpgrade (402.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (289.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-505584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-505584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m49.307749222s)

                                                
                                                
-- stdout --
	* [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 17:27:01.372592   72662 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:27:01.373175   72662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:27:01.373230   72662 out.go:304] Setting ErrFile to fd 2...
	I0814 17:27:01.373254   72662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:27:01.374024   72662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:27:01.374880   72662 out.go:298] Setting JSON to false
	I0814 17:27:01.376543   72662 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7765,"bootTime":1723648656,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:27:01.376656   72662 start.go:139] virtualization: kvm guest
	I0814 17:27:01.378984   72662 out.go:177] * [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:27:01.380358   72662 notify.go:220] Checking for updates...
	I0814 17:27:01.380419   72662 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:27:01.381769   72662 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:27:01.383059   72662 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:27:01.384238   72662 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:27:01.385528   72662 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:27:01.386887   72662 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:27:01.388699   72662 config.go:182] Loaded profile config "bridge-984053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:27:01.388876   72662 config.go:182] Loaded profile config "calico-984053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:27:01.389000   72662 config.go:182] Loaded profile config "enable-default-cni-984053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:27:01.389137   72662 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:27:01.429422   72662 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 17:27:01.430793   72662 start.go:297] selected driver: kvm2
	I0814 17:27:01.430817   72662 start.go:901] validating driver "kvm2" against <nil>
	I0814 17:27:01.430835   72662 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:27:01.432034   72662 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:27:01.432169   72662 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:27:01.448763   72662 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:27:01.448811   72662 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 17:27:01.449044   72662 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:27:01.449119   72662 cni.go:84] Creating CNI manager for ""
	I0814 17:27:01.449136   72662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:27:01.449149   72662 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 17:27:01.449210   72662 start.go:340] cluster config:
	{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:27:01.449326   72662 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:27:01.451833   72662 out.go:177] * Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	I0814 17:27:01.453140   72662 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:27:01.453188   72662 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:27:01.453197   72662 cache.go:56] Caching tarball of preloaded images
	I0814 17:27:01.453295   72662 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:27:01.453314   72662 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:27:01.453436   72662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:27:01.453465   72662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json: {Name:mk5825df71623402e58f07d15d62cbfb0fe0c43c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:27:01.453638   72662 start.go:360] acquireMachinesLock for old-k8s-version-505584: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:27:18.267773   72662 start.go:364] duration metric: took 16.814098845s to acquireMachinesLock for "old-k8s-version-505584"
	I0814 17:27:18.267851   72662 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:27:18.267958   72662 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 17:27:18.269923   72662 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 17:27:18.270145   72662 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:27:18.270197   72662 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:27:18.291442   72662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39481
	I0814 17:27:18.292004   72662 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:27:18.292667   72662 main.go:141] libmachine: Using API Version  1
	I0814 17:27:18.292699   72662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:27:18.293083   72662 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:27:18.293296   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:27:18.293452   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:27:18.293650   72662 start.go:159] libmachine.API.Create for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:27:18.293694   72662 client.go:168] LocalClient.Create starting
	I0814 17:27:18.293740   72662 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 17:27:18.293788   72662 main.go:141] libmachine: Decoding PEM data...
	I0814 17:27:18.293814   72662 main.go:141] libmachine: Parsing certificate...
	I0814 17:27:18.293882   72662 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 17:27:18.293910   72662 main.go:141] libmachine: Decoding PEM data...
	I0814 17:27:18.293929   72662 main.go:141] libmachine: Parsing certificate...
	I0814 17:27:18.293958   72662 main.go:141] libmachine: Running pre-create checks...
	I0814 17:27:18.293970   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .PreCreateCheck
	I0814 17:27:18.294372   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:27:18.294870   72662 main.go:141] libmachine: Creating machine...
	I0814 17:27:18.294889   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .Create
	I0814 17:27:18.295038   72662 main.go:141] libmachine: (old-k8s-version-505584) Creating KVM machine...
	I0814 17:27:18.296558   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found existing default KVM network
	I0814 17:27:18.298135   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:18.297967   72820 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:38:ba:c5} reservation:<nil>}
	I0814 17:27:18.299158   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:18.299072   72820 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:15:0b:d2} reservation:<nil>}
	I0814 17:27:18.299994   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:18.299904   72820 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9d:42:61} reservation:<nil>}
	I0814 17:27:18.301100   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:18.301033   72820 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289aa0}
	I0814 17:27:18.301173   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | created network xml: 
	I0814 17:27:18.301196   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | <network>
	I0814 17:27:18.301211   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |   <name>mk-old-k8s-version-505584</name>
	I0814 17:27:18.301225   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |   <dns enable='no'/>
	I0814 17:27:18.301252   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |   
	I0814 17:27:18.301276   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0814 17:27:18.301300   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |     <dhcp>
	I0814 17:27:18.301310   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0814 17:27:18.301320   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |     </dhcp>
	I0814 17:27:18.301333   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |   </ip>
	I0814 17:27:18.301347   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG |   
	I0814 17:27:18.301358   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | </network>
	I0814 17:27:18.301373   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | 
	I0814 17:27:18.306478   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | trying to create private KVM network mk-old-k8s-version-505584 192.168.72.0/24...
	I0814 17:27:18.387745   72662 main.go:141] libmachine: (old-k8s-version-505584) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584 ...
	I0814 17:27:18.387782   72662 main.go:141] libmachine: (old-k8s-version-505584) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 17:27:18.387792   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | private KVM network mk-old-k8s-version-505584 192.168.72.0/24 created
	I0814 17:27:18.387809   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:18.387563   72820 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:27:18.387828   72662 main.go:141] libmachine: (old-k8s-version-505584) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 17:27:18.656802   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:18.656683   72820 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa...
	I0814 17:27:18.963742   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:18.963604   72820 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/old-k8s-version-505584.rawdisk...
	I0814 17:27:18.963777   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Writing magic tar header
	I0814 17:27:18.963798   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Writing SSH key tar header
	I0814 17:27:18.963810   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:18.963779   72820 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584 ...
	I0814 17:27:18.963983   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584
	I0814 17:27:18.964052   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 17:27:18.964080   72662 main.go:141] libmachine: (old-k8s-version-505584) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584 (perms=drwx------)
	I0814 17:27:18.964106   72662 main.go:141] libmachine: (old-k8s-version-505584) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 17:27:18.964116   72662 main.go:141] libmachine: (old-k8s-version-505584) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 17:27:18.964135   72662 main.go:141] libmachine: (old-k8s-version-505584) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 17:27:18.964149   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:27:18.964160   72662 main.go:141] libmachine: (old-k8s-version-505584) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 17:27:18.964174   72662 main.go:141] libmachine: (old-k8s-version-505584) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 17:27:18.964183   72662 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
	I0814 17:27:18.964199   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 17:27:18.964210   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 17:27:18.964220   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Checking permissions on dir: /home/jenkins
	I0814 17:27:18.964229   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Checking permissions on dir: /home
	I0814 17:27:18.964240   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Skipping /home - not owner
	I0814 17:27:18.965286   72662 main.go:141] libmachine: (old-k8s-version-505584) define libvirt domain using xml: 
	I0814 17:27:18.965310   72662 main.go:141] libmachine: (old-k8s-version-505584) <domain type='kvm'>
	I0814 17:27:18.965320   72662 main.go:141] libmachine: (old-k8s-version-505584)   <name>old-k8s-version-505584</name>
	I0814 17:27:18.965327   72662 main.go:141] libmachine: (old-k8s-version-505584)   <memory unit='MiB'>2200</memory>
	I0814 17:27:18.965336   72662 main.go:141] libmachine: (old-k8s-version-505584)   <vcpu>2</vcpu>
	I0814 17:27:18.965347   72662 main.go:141] libmachine: (old-k8s-version-505584)   <features>
	I0814 17:27:18.965362   72662 main.go:141] libmachine: (old-k8s-version-505584)     <acpi/>
	I0814 17:27:18.965373   72662 main.go:141] libmachine: (old-k8s-version-505584)     <apic/>
	I0814 17:27:18.965381   72662 main.go:141] libmachine: (old-k8s-version-505584)     <pae/>
	I0814 17:27:18.965389   72662 main.go:141] libmachine: (old-k8s-version-505584)     
	I0814 17:27:18.965407   72662 main.go:141] libmachine: (old-k8s-version-505584)   </features>
	I0814 17:27:18.965421   72662 main.go:141] libmachine: (old-k8s-version-505584)   <cpu mode='host-passthrough'>
	I0814 17:27:18.965444   72662 main.go:141] libmachine: (old-k8s-version-505584)   
	I0814 17:27:18.965455   72662 main.go:141] libmachine: (old-k8s-version-505584)   </cpu>
	I0814 17:27:18.965464   72662 main.go:141] libmachine: (old-k8s-version-505584)   <os>
	I0814 17:27:18.965475   72662 main.go:141] libmachine: (old-k8s-version-505584)     <type>hvm</type>
	I0814 17:27:18.965486   72662 main.go:141] libmachine: (old-k8s-version-505584)     <boot dev='cdrom'/>
	I0814 17:27:18.965496   72662 main.go:141] libmachine: (old-k8s-version-505584)     <boot dev='hd'/>
	I0814 17:27:18.965505   72662 main.go:141] libmachine: (old-k8s-version-505584)     <bootmenu enable='no'/>
	I0814 17:27:18.965515   72662 main.go:141] libmachine: (old-k8s-version-505584)   </os>
	I0814 17:27:18.965523   72662 main.go:141] libmachine: (old-k8s-version-505584)   <devices>
	I0814 17:27:18.965536   72662 main.go:141] libmachine: (old-k8s-version-505584)     <disk type='file' device='cdrom'>
	I0814 17:27:18.965556   72662 main.go:141] libmachine: (old-k8s-version-505584)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/boot2docker.iso'/>
	I0814 17:27:18.965567   72662 main.go:141] libmachine: (old-k8s-version-505584)       <target dev='hdc' bus='scsi'/>
	I0814 17:27:18.965576   72662 main.go:141] libmachine: (old-k8s-version-505584)       <readonly/>
	I0814 17:27:18.965587   72662 main.go:141] libmachine: (old-k8s-version-505584)     </disk>
	I0814 17:27:18.965597   72662 main.go:141] libmachine: (old-k8s-version-505584)     <disk type='file' device='disk'>
	I0814 17:27:18.965609   72662 main.go:141] libmachine: (old-k8s-version-505584)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 17:27:18.965626   72662 main.go:141] libmachine: (old-k8s-version-505584)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/old-k8s-version-505584.rawdisk'/>
	I0814 17:27:18.965636   72662 main.go:141] libmachine: (old-k8s-version-505584)       <target dev='hda' bus='virtio'/>
	I0814 17:27:18.965643   72662 main.go:141] libmachine: (old-k8s-version-505584)     </disk>
	I0814 17:27:18.965649   72662 main.go:141] libmachine: (old-k8s-version-505584)     <interface type='network'>
	I0814 17:27:18.965659   72662 main.go:141] libmachine: (old-k8s-version-505584)       <source network='mk-old-k8s-version-505584'/>
	I0814 17:27:18.965670   72662 main.go:141] libmachine: (old-k8s-version-505584)       <model type='virtio'/>
	I0814 17:27:18.965681   72662 main.go:141] libmachine: (old-k8s-version-505584)     </interface>
	I0814 17:27:18.965692   72662 main.go:141] libmachine: (old-k8s-version-505584)     <interface type='network'>
	I0814 17:27:18.965702   72662 main.go:141] libmachine: (old-k8s-version-505584)       <source network='default'/>
	I0814 17:27:18.965712   72662 main.go:141] libmachine: (old-k8s-version-505584)       <model type='virtio'/>
	I0814 17:27:18.965723   72662 main.go:141] libmachine: (old-k8s-version-505584)     </interface>
	I0814 17:27:18.965732   72662 main.go:141] libmachine: (old-k8s-version-505584)     <serial type='pty'>
	I0814 17:27:18.965739   72662 main.go:141] libmachine: (old-k8s-version-505584)       <target port='0'/>
	I0814 17:27:18.965746   72662 main.go:141] libmachine: (old-k8s-version-505584)     </serial>
	I0814 17:27:18.965756   72662 main.go:141] libmachine: (old-k8s-version-505584)     <console type='pty'>
	I0814 17:27:18.965767   72662 main.go:141] libmachine: (old-k8s-version-505584)       <target type='serial' port='0'/>
	I0814 17:27:18.965776   72662 main.go:141] libmachine: (old-k8s-version-505584)     </console>
	I0814 17:27:18.965787   72662 main.go:141] libmachine: (old-k8s-version-505584)     <rng model='virtio'>
	I0814 17:27:18.965797   72662 main.go:141] libmachine: (old-k8s-version-505584)       <backend model='random'>/dev/random</backend>
	I0814 17:27:18.965807   72662 main.go:141] libmachine: (old-k8s-version-505584)     </rng>
	I0814 17:27:18.965821   72662 main.go:141] libmachine: (old-k8s-version-505584)     
	I0814 17:27:18.965831   72662 main.go:141] libmachine: (old-k8s-version-505584)     
	I0814 17:27:18.965839   72662 main.go:141] libmachine: (old-k8s-version-505584)   </devices>
	I0814 17:27:18.965849   72662 main.go:141] libmachine: (old-k8s-version-505584) </domain>
	I0814 17:27:18.965860   72662 main.go:141] libmachine: (old-k8s-version-505584) 
	I0814 17:27:18.970010   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:76:9d:72 in network default
	I0814 17:27:18.970711   72662 main.go:141] libmachine: (old-k8s-version-505584) Ensuring networks are active...
	I0814 17:27:18.970740   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:18.971654   72662 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network default is active
	I0814 17:27:18.972163   72662 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network mk-old-k8s-version-505584 is active
	I0814 17:27:18.972807   72662 main.go:141] libmachine: (old-k8s-version-505584) Getting domain xml...
	I0814 17:27:18.973869   72662 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
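
Note: the network and domain XML printed above are handed to libvirt verbatim, defined as persistent objects, and then started. A minimal sketch of that define-and-start flow, assuming the libvirt.org/go/libvirt Go bindings and a qemu:///system connection (names and error handling simplified; this is illustrative, not minikube's actual driver code):

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// defineAndStart defines a private network and a domain from raw XML and
// boots the domain, roughly mirroring the "define libvirt domain using xml"
// step in the log above. networkXML and domainXML are assumed to hold
// documents like the ones printed by the driver.
func defineAndStart(networkXML, domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(networkXML) // persistent network definition
	if err != nil {
		return err
	}
	if err := net.Create(); err != nil { // activate the network
		return err
	}

	dom, err := conn.DomainDefineXML(domainXML) // persistent domain definition
	if err != nil {
		return err
	}
	return dom.Create() // boot the domain
}

func main() {
	if err := defineAndStart("<network>...</network>", "<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
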
	I0814 17:27:20.471683   72662 main.go:141] libmachine: (old-k8s-version-505584) Waiting to get IP...
	I0814 17:27:20.472800   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:20.473583   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:20.473610   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:20.473559   72820 retry.go:31] will retry after 209.46761ms: waiting for machine to come up
	I0814 17:27:20.685292   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:20.685828   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:20.685855   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:20.685784   72820 retry.go:31] will retry after 307.040998ms: waiting for machine to come up
	I0814 17:27:20.994455   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:20.995047   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:20.995072   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:20.995003   72820 retry.go:31] will retry after 362.36242ms: waiting for machine to come up
	I0814 17:27:21.358419   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:21.358981   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:21.359010   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:21.358934   72820 retry.go:31] will retry after 569.409262ms: waiting for machine to come up
	I0814 17:27:21.929869   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:21.930492   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:21.930514   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:21.930450   72820 retry.go:31] will retry after 584.091695ms: waiting for machine to come up
	I0814 17:27:22.516276   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:22.516830   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:22.516871   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:22.516774   72820 retry.go:31] will retry after 689.174354ms: waiting for machine to come up
	I0814 17:27:23.207930   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:23.208482   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:23.208513   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:23.208429   72820 retry.go:31] will retry after 856.068904ms: waiting for machine to come up
	I0814 17:27:24.065960   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:24.066594   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:24.066634   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:24.066527   72820 retry.go:31] will retry after 1.279495166s: waiting for machine to come up
	I0814 17:27:25.347395   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:25.347831   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:25.347855   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:25.347788   72820 retry.go:31] will retry after 1.213890144s: waiting for machine to come up
	I0814 17:27:26.563218   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:26.563838   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:26.563866   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:26.563783   72820 retry.go:31] will retry after 1.706844197s: waiting for machine to come up
	I0814 17:27:28.272622   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:28.273206   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:28.273233   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:28.273160   72820 retry.go:31] will retry after 2.407949465s: waiting for machine to come up
	I0814 17:27:30.683600   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:30.684229   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:30.684261   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:30.684183   72820 retry.go:31] will retry after 2.423354878s: waiting for machine to come up
	I0814 17:27:33.108976   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:33.109594   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:33.109616   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:33.109549   72820 retry.go:31] will retry after 4.282171923s: waiting for machine to come up
	I0814 17:27:37.393949   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:37.394446   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:27:37.394470   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:27:37.394403   72820 retry.go:31] will retry after 4.108790875s: waiting for machine to come up
	I0814 17:27:41.504324   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:41.504825   72662 main.go:141] libmachine: (old-k8s-version-505584) Found IP for machine: 192.168.72.49
	I0814 17:27:41.504839   72662 main.go:141] libmachine: (old-k8s-version-505584) Reserving static IP address...
	I0814 17:27:41.504877   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has current primary IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:41.505204   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"} in network mk-old-k8s-version-505584
	I0814 17:27:41.592230   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Getting to WaitForSSH function...
	I0814 17:27:41.592255   72662 main.go:141] libmachine: (old-k8s-version-505584) Reserved static IP address: 192.168.72.49
	I0814 17:27:41.592268   72662 main.go:141] libmachine: (old-k8s-version-505584) Waiting for SSH to be available...
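
Note: the "waiting for machine to come up" retries above poll libvirt's DHCP lease table for the domain's MAC address until a lease appears. A rough equivalent, continuing with the same libvirt connection as the sketch earlier (add fmt and time to the imports; the network and MAC values come from the log, and the fixed polling interval is a simplification of the driver's growing backoff):

// waitForIP polls the DHCP leases of a libvirt network until one matches
// the given MAC address, or the timeout expires.
func waitForIP(conn *libvirt.Connect, networkName, mac string, timeout time.Duration) (string, error) {
	net, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		leases, err := net.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.Mac == mac {
				return l.IPaddr, nil // e.g. "192.168.72.49"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, networkName)
}

// usage: ip, err := waitForIP(conn, "mk-old-k8s-version-505584", "52:54:00:b6:27:ea", 2*time.Minute)
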
	I0814 17:27:41.595820   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:41.596254   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:41.596280   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:41.596606   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH client type: external
	I0814 17:27:41.596644   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa (-rw-------)
	I0814 17:27:41.596674   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:27:41.596686   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | About to run SSH command:
	I0814 17:27:41.596701   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | exit 0
	I0814 17:27:41.746942   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | SSH cmd err, output: <nil>: 
	I0814 17:27:41.747462   72662 main.go:141] libmachine: (old-k8s-version-505584) KVM machine creation complete!
	I0814 17:27:41.747848   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:27:41.748355   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:27:41.748544   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:27:41.748726   72662 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 17:27:41.748745   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetState
	I0814 17:27:41.750186   72662 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 17:27:41.750203   72662 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 17:27:41.750211   72662 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 17:27:41.750220   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:41.754414   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:41.754866   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:41.754890   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:41.755136   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:41.755314   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:41.755496   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:41.755620   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:41.755810   72662 main.go:141] libmachine: Using SSH client type: native
	I0814 17:27:41.756067   72662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:27:41.756085   72662 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 17:27:41.882890   72662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
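
Note: the "About to run SSH command: exit 0" probe above is only a reachability check: open an SSH session with the generated key and see whether a trivial command succeeds. A hedged sketch using golang.org/x/crypto/ssh, with host key verification disabled as is usual for a freshly created VM (the address, user, and key path are taken from the log):

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReachable returns nil once "exit 0" runs successfully over SSH,
// mirroring the WaitForSSH step in the log above.
func sshReachable(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // new VM, no known_hosts entry yet
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	err := sshReachable("192.168.72.49:22", "docker",
		"/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
}
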
	I0814 17:27:41.882925   72662 main.go:141] libmachine: Detecting the provisioner...
	I0814 17:27:41.882937   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:41.886427   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:41.886870   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:41.886939   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:41.887110   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:41.887348   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:41.887564   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:41.887737   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:41.887906   72662 main.go:141] libmachine: Using SSH client type: native
	I0814 17:27:41.888150   72662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:27:41.888167   72662 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 17:27:42.004868   72662 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 17:27:42.004959   72662 main.go:141] libmachine: found compatible host: buildroot
	I0814 17:27:42.004976   72662 main.go:141] libmachine: Provisioning with buildroot...
	I0814 17:27:42.004995   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:27:42.005276   72662 buildroot.go:166] provisioning hostname "old-k8s-version-505584"
	I0814 17:27:42.005302   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:27:42.005461   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:42.008765   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.009115   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:42.009145   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.009409   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:42.009584   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:42.009739   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:42.009874   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:42.010031   72662 main.go:141] libmachine: Using SSH client type: native
	I0814 17:27:42.010261   72662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:27:42.010282   72662 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-505584 && echo "old-k8s-version-505584" | sudo tee /etc/hostname
	I0814 17:27:42.139669   72662 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-505584
	
	I0814 17:27:42.139698   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:42.142968   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.143407   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:42.143430   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.143581   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:42.143764   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:42.143932   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:42.144111   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:42.144330   72662 main.go:141] libmachine: Using SSH client type: native
	I0814 17:27:42.144486   72662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:27:42.144506   72662 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-505584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-505584/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-505584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:27:42.267547   72662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:27:42.267581   72662 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:27:42.267617   72662 buildroot.go:174] setting up certificates
	I0814 17:27:42.267631   72662 provision.go:84] configureAuth start
	I0814 17:27:42.267645   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:27:42.267981   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:27:42.271232   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.271767   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:42.271799   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.272003   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:42.274920   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.275339   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:42.275379   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.275487   72662 provision.go:143] copyHostCerts
	I0814 17:27:42.275566   72662 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:27:42.275580   72662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:27:42.275640   72662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:27:42.275774   72662 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:27:42.275783   72662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:27:42.275815   72662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:27:42.275905   72662 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:27:42.275911   72662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:27:42.275941   72662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:27:42.276016   72662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-505584 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
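
Note: the "generating server cert" line above produces a TLS server certificate whose SAN list contains the loopback address, the VM IP, and the hostnames shown. A compact illustration with Go's standard crypto/x509, self-signed purely for brevity; the real flow signs it with the ca.pem/ca-key.pem pair named in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-505584"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-505584"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.49")},
	}
	// Self-signed for the sketch; minikube passes its CA cert and key as parent and signer instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
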
	I0814 17:27:42.638488   72662 provision.go:177] copyRemoteCerts
	I0814 17:27:42.638570   72662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:27:42.638611   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:42.642140   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.642572   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:42.642610   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.642799   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:42.642949   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:42.643069   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:42.643191   72662 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:27:42.743972   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 17:27:42.774148   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:27:42.809662   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:27:42.846190   72662 provision.go:87] duration metric: took 578.544897ms to configureAuth
	I0814 17:27:42.846225   72662 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:27:42.846457   72662 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:27:42.846571   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:42.849673   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.850141   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:42.850168   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:42.850483   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:42.850701   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:42.850841   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:42.850949   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:42.851088   72662 main.go:141] libmachine: Using SSH client type: native
	I0814 17:27:42.851308   72662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:27:42.851348   72662 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:27:43.221573   72662 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:27:43.221605   72662 main.go:141] libmachine: Checking connection to Docker...
	I0814 17:27:43.221618   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetURL
	I0814 17:27:43.223186   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using libvirt version 6000000
	I0814 17:27:43.226379   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.226680   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:43.226693   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.226924   72662 main.go:141] libmachine: Docker is up and running!
	I0814 17:27:43.226932   72662 main.go:141] libmachine: Reticulating splines...
	I0814 17:27:43.226938   72662 client.go:171] duration metric: took 24.933233946s to LocalClient.Create
	I0814 17:27:43.226955   72662 start.go:167] duration metric: took 24.933308387s to libmachine.API.Create "old-k8s-version-505584"
	I0814 17:27:43.226961   72662 start.go:293] postStartSetup for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:27:43.226971   72662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:27:43.226984   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:27:43.227517   72662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:27:43.227553   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:43.229928   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.230334   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:43.230346   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.230666   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:43.230806   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:43.230888   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:43.230952   72662 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:27:43.315267   72662 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:27:43.321222   72662 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:27:43.321246   72662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:27:43.321296   72662 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:27:43.321402   72662 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:27:43.321697   72662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:27:43.333298   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:27:43.361405   72662 start.go:296] duration metric: took 134.433297ms for postStartSetup
	I0814 17:27:43.361462   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:27:43.362004   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:27:43.365491   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.366293   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:43.366327   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.366477   72662 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:27:43.366716   72662 start.go:128] duration metric: took 25.098736237s to createHost
	I0814 17:27:43.366750   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:43.370488   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.370826   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:43.370849   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.371118   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:43.371310   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:43.371593   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:43.371775   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:43.371954   72662 main.go:141] libmachine: Using SSH client type: native
	I0814 17:27:43.372154   72662 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:27:43.372169   72662 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:27:43.489037   72662 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723656463.470142926
	
	I0814 17:27:43.489069   72662 fix.go:216] guest clock: 1723656463.470142926
	I0814 17:27:43.489079   72662 fix.go:229] Guest: 2024-08-14 17:27:43.470142926 +0000 UTC Remote: 2024-08-14 17:27:43.366729883 +0000 UTC m=+42.038792842 (delta=103.413043ms)
	I0814 17:27:43.489107   72662 fix.go:200] guest clock delta is within tolerance: 103.413043ms
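
Note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine only if the delta is within a tolerance. A small sketch of that comparison (uses the standard strings, strconv, and time packages; the 2-second tolerance in the usage line is an assumption, the actual threshold is not shown in the log):

// clockDelta parses the guest's "seconds.nanoseconds" timestamp and returns
// the absolute difference from the given host reference time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	d := time.Unix(sec, nsec).Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

// usage: d, _ := clockDelta("1723656463.470142926", hostTime); withinTolerance := d < 2*time.Second
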
	I0814 17:27:43.489115   72662 start.go:83] releasing machines lock for "old-k8s-version-505584", held for 25.221306516s
	I0814 17:27:43.489141   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:27:43.489438   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:27:43.492577   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.493203   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:43.493237   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.493405   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:27:43.494197   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:27:43.494389   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:27:43.494473   72662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:27:43.494536   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:43.494589   72662 ssh_runner.go:195] Run: cat /version.json
	I0814 17:27:43.494616   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:27:43.498541   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.498971   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:43.498986   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.499219   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:43.499380   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:43.499543   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.499590   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:43.499761   72662 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:27:43.499877   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:43.499902   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:43.500121   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:27:43.500249   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:27:43.500451   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:27:43.500593   72662 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:27:43.623112   72662 ssh_runner.go:195] Run: systemctl --version
	I0814 17:27:43.630083   72662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:27:43.791248   72662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:27:43.799830   72662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:27:43.799892   72662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:27:43.821668   72662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:27:43.821687   72662 start.go:495] detecting cgroup driver to use...
	I0814 17:27:43.821736   72662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:27:43.842093   72662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:27:43.859420   72662 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:27:43.859490   72662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:27:43.875756   72662 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:27:43.893568   72662 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:27:44.048915   72662 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:27:44.214972   72662 docker.go:233] disabling docker service ...
	I0814 17:27:44.215031   72662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:27:44.231855   72662 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:27:44.245877   72662 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:27:44.409250   72662 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:27:44.560155   72662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:27:44.576656   72662 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:27:44.596756   72662 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:27:44.596810   72662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:27:44.607419   72662 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:27:44.607481   72662 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:27:44.619146   72662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:27:44.630432   72662 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
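
Note: the three sed edits above pin the pause image for Kubernetes v1.20 and switch CRI-O to cgroupfs with conmon in the pod cgroup. In effect /etc/crio/crio.conf.d/02-crio.conf ends up containing lines equivalent to the following (illustrative; section headers and surrounding keys omitted):

pause_image = "registry.k8s.io/pause:3.2"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
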
	I0814 17:27:44.641623   72662 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:27:44.656514   72662 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:27:44.667405   72662 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:27:44.667476   72662 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:27:44.683548   72662 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:27:44.699078   72662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:27:44.856459   72662 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:27:45.041992   72662 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:27:45.042063   72662 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:27:45.048595   72662 start.go:563] Will wait 60s for crictl version
	I0814 17:27:45.048653   72662 ssh_runner.go:195] Run: which crictl
	I0814 17:27:45.053064   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:27:45.107036   72662 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:27:45.107096   72662 ssh_runner.go:195] Run: crio --version
	I0814 17:27:45.142156   72662 ssh_runner.go:195] Run: crio --version
	I0814 17:27:45.180985   72662 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:27:45.182230   72662 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:27:45.185084   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:45.185608   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:27:34 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:27:45.185643   72662 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:27:45.185875   72662 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:27:45.190316   72662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:27:45.205920   72662 kubeadm.go:883] updating cluster {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:27:45.206056   72662 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:27:45.206111   72662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:27:45.251763   72662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:27:45.251840   72662 ssh_runner.go:195] Run: which lz4
	I0814 17:27:45.256741   72662 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:27:45.261510   72662 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:27:45.261544   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:27:47.075483   72662 crio.go:462] duration metric: took 1.818781979s to copy over tarball
	I0814 17:27:47.075545   72662 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:27:50.409110   72662 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.333542737s)
	I0814 17:27:50.409141   72662 crio.go:469] duration metric: took 3.333628406s to extract the tarball
	I0814 17:27:50.409148   72662 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:27:50.455649   72662 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:27:50.510857   72662 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:27:50.510884   72662 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:27:50.510948   72662 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:27:50.511391   72662 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:27:50.511410   72662 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:27:50.511449   72662 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:27:50.511485   72662 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:27:50.511513   72662 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:27:50.511554   72662 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:27:50.511661   72662 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:27:50.513319   72662 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:27:50.513393   72662 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:27:50.513650   72662 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:27:50.513320   72662 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:27:50.513747   72662 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:27:50.513851   72662 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:27:50.513948   72662 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:27:50.514436   72662 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:27:50.749149   72662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:27:50.793819   72662 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:27:50.793874   72662 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:27:50.793921   72662 ssh_runner.go:195] Run: which crictl
	I0814 17:27:50.798810   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:27:50.833842   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:27:50.834878   72662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:27:50.857205   72662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:27:50.869486   72662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:27:50.884244   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:27:50.900661   72662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:27:50.906054   72662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:27:50.907478   72662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:27:50.931537   72662 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:27:50.931590   72662 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:27:50.931645   72662 ssh_runner.go:195] Run: which crictl
	I0814 17:27:51.031533   72662 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:27:51.031572   72662 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:27:51.031615   72662 ssh_runner.go:195] Run: which crictl
	I0814 17:27:51.076296   72662 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:27:51.076316   72662 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:27:51.076359   72662 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:27:51.076405   72662 ssh_runner.go:195] Run: which crictl
	I0814 17:27:51.087121   72662 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:27:51.087153   72662 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:27:51.087187   72662 ssh_runner.go:195] Run: which crictl
	I0814 17:27:51.087249   72662 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:27:51.087264   72662 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:27:51.087279   72662 ssh_runner.go:195] Run: which crictl
	I0814 17:27:51.104630   72662 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:27:51.104671   72662 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:27:51.104716   72662 ssh_runner.go:195] Run: which crictl
	I0814 17:27:51.104777   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:27:51.104812   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:27:51.104844   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:27:51.104890   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:27:51.104919   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:27:51.225515   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:27:51.225644   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:27:51.225730   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:27:51.225856   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:27:51.225955   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:27:51.226035   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:27:51.376148   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:27:51.376198   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:27:51.376206   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:27:51.376266   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:27:51.376315   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:27:51.376363   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:27:51.423344   72662 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:27:51.527408   72662 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:27:51.527504   72662 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:27:51.527522   72662 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:27:51.527582   72662 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:27:51.527616   72662 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:27:51.527664   72662 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:27:51.649489   72662 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:27:51.649546   72662 cache_images.go:92] duration metric: took 1.138645268s to LoadCachedImages
	W0814 17:27:51.649623   72662 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 17:27:51.649651   72662 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0814 17:27:51.649783   72662 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-505584 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:27:51.649860   72662 ssh_runner.go:195] Run: crio config
	I0814 17:27:51.718011   72662 cni.go:84] Creating CNI manager for ""
	I0814 17:27:51.718031   72662 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:27:51.718040   72662 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:27:51.718057   72662 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-505584 NodeName:old-k8s-version-505584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:27:51.718183   72662 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-505584"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:27:51.718250   72662 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:27:51.729151   72662 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:27:51.729216   72662 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:27:51.738887   72662 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0814 17:27:51.754773   72662 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:27:51.772208   72662 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0814 17:27:51.789710   72662 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0814 17:27:51.794432   72662 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:27:51.806918   72662 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:27:51.949741   72662 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:27:51.968545   72662 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584 for IP: 192.168.72.49
	I0814 17:27:51.968574   72662 certs.go:194] generating shared ca certs ...
	I0814 17:27:51.968590   72662 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:27:51.968759   72662 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:27:51.968817   72662 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:27:51.968832   72662 certs.go:256] generating profile certs ...
	I0814 17:27:51.968903   72662 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key
	I0814 17:27:51.968937   72662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.crt with IP's: []
	I0814 17:27:52.531417   72662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.crt ...
	I0814 17:27:52.531452   72662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.crt: {Name:mka6950cb2be8d34fbf4030d1c045410e382c254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:27:52.531618   72662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key ...
	I0814 17:27:52.531634   72662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key: {Name:mk6bbdccdbb0dee893da389d32a51b779afc9c13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:27:52.531718   72662 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f
	I0814 17:27:52.531734   72662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt.c375770f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.49]
	I0814 17:27:52.805303   72662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt.c375770f ...
	I0814 17:27:52.805344   72662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt.c375770f: {Name:mk4a320eab7abe9d39da8dcb63f7846c2648c0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:27:52.805586   72662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f ...
	I0814 17:27:52.805605   72662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f: {Name:mk6746c5cbc748dfc8e79501331852d509f9228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:27:52.805723   72662 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt.c375770f -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt
	I0814 17:27:52.805840   72662 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key
	I0814 17:27:52.805915   72662 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key
	I0814 17:27:52.805939   72662 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt with IP's: []
	I0814 17:27:52.982963   72662 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt ...
	I0814 17:27:52.982990   72662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt: {Name:mkd62ff6274f8cc03701ab851fce9c650c61b746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:27:52.983123   72662 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key ...
	I0814 17:27:52.983131   72662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key: {Name:mkbcaffff4bf559bddaae242509311c03ab01786 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:27:52.983273   72662 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:27:52.983304   72662 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:27:52.983311   72662 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:27:52.983377   72662 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:27:52.983408   72662 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:27:52.983430   72662 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:27:52.983469   72662 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:27:52.984093   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:27:53.017968   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:27:53.051822   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:27:53.074881   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:27:53.101409   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 17:27:53.127782   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:27:53.154298   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:27:53.177850   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:27:53.201585   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:27:53.224401   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:27:53.248737   72662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:27:53.276893   72662 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:27:53.299410   72662 ssh_runner.go:195] Run: openssl version
	I0814 17:27:53.306856   72662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:27:53.320077   72662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:27:53.324904   72662 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:27:53.324967   72662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:27:53.330897   72662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:27:53.342461   72662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:27:53.352865   72662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:27:53.357650   72662 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:27:53.357708   72662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:27:53.364231   72662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:27:53.376889   72662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:27:53.388282   72662 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:27:53.392495   72662 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:27:53.392564   72662 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:27:53.397926   72662 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:27:53.408229   72662 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:27:53.411990   72662 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 17:27:53.412054   72662 kubeadm.go:392] StartCluster: {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:27:53.412124   72662 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:27:53.412163   72662 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:27:53.460105   72662 cri.go:89] found id: ""
	I0814 17:27:53.460182   72662 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:27:53.472352   72662 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:27:53.486285   72662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:27:53.518476   72662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:27:53.518502   72662 kubeadm.go:157] found existing configuration files:
	
	I0814 17:27:53.518573   72662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:27:53.530768   72662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:27:53.530833   72662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:27:53.541222   72662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:27:53.553223   72662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:27:53.553293   72662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:27:53.564999   72662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:27:53.583032   72662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:27:53.583114   72662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:27:53.598484   72662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:27:53.609365   72662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:27:53.609423   72662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:27:53.627847   72662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:27:53.794609   72662 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:27:53.794691   72662 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:27:53.973374   72662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:27:53.973512   72662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:27:53.973662   72662 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:27:54.157516   72662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:27:54.159931   72662 out.go:204]   - Generating certificates and keys ...
	I0814 17:27:54.160028   72662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:27:54.160120   72662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:27:54.586225   72662 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 17:27:54.682159   72662 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 17:27:54.998263   72662 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 17:27:55.202222   72662 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 17:27:55.307668   72662 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 17:27:55.307931   72662 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-505584] and IPs [192.168.72.49 127.0.0.1 ::1]
	I0814 17:27:55.404115   72662 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 17:27:55.404421   72662 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-505584] and IPs [192.168.72.49 127.0.0.1 ::1]
	I0814 17:27:55.486494   72662 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 17:27:55.595705   72662 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 17:27:55.736109   72662 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 17:27:55.736546   72662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:27:56.161714   72662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:27:56.399073   72662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:27:56.726851   72662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:27:57.116751   72662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:27:57.135674   72662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:27:57.137146   72662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:27:57.137361   72662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:27:57.324060   72662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:27:57.325988   72662 out.go:204]   - Booting up control plane ...
	I0814 17:27:57.326111   72662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:27:57.339831   72662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:27:57.343296   72662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:27:57.343420   72662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:27:57.347265   72662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:28:37.344974   72662 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:28:37.345148   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:28:37.345430   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:28:42.346185   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:28:42.346379   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:28:52.346771   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:28:52.346978   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:29:12.348151   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:29:12.348426   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:29:52.349053   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:29:52.349473   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:29:52.349502   72662 kubeadm.go:310] 
	I0814 17:29:52.349577   72662 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:29:52.349676   72662 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:29:52.349691   72662 kubeadm.go:310] 
	I0814 17:29:52.349776   72662 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:29:52.349860   72662 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:29:52.350129   72662 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:29:52.350158   72662 kubeadm.go:310] 
	I0814 17:29:52.350399   72662 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:29:52.350491   72662 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:29:52.350576   72662 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:29:52.350588   72662 kubeadm.go:310] 
	I0814 17:29:52.350847   72662 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:29:52.351001   72662 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:29:52.351015   72662 kubeadm.go:310] 
	I0814 17:29:52.351507   72662 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:29:52.351716   72662 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:29:52.351904   72662 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:29:52.352091   72662 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:29:52.352122   72662 kubeadm.go:310] 
	I0814 17:29:52.352503   72662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:29:52.352618   72662 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:29:52.352791   72662 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 17:29:52.352845   72662 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-505584] and IPs [192.168.72.49 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-505584] and IPs [192.168.72.49 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 17:29:52.352893   72662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:29:53.347294   72662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:29:53.361259   72662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:29:53.370419   72662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:29:53.370442   72662 kubeadm.go:157] found existing configuration files:
	
	I0814 17:29:53.370495   72662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:29:53.379060   72662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:29:53.379121   72662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:29:53.387615   72662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:29:53.396049   72662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:29:53.396099   72662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:29:53.404861   72662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:29:53.413527   72662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:29:53.413578   72662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:29:53.423650   72662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:29:53.432946   72662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:29:53.432995   72662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:29:53.442805   72662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:29:53.646573   72662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:31:50.045667   72662 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:31:50.045790   72662 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:31:50.047209   72662 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:31:50.047277   72662 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:31:50.047402   72662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:31:50.047535   72662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:31:50.047662   72662 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:31:50.047760   72662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:31:50.049488   72662 out.go:204]   - Generating certificates and keys ...
	I0814 17:31:50.049566   72662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:31:50.049631   72662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:31:50.049715   72662 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:31:50.049790   72662 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:31:50.049870   72662 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:31:50.049933   72662 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:31:50.050001   72662 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:31:50.050089   72662 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:31:50.050187   72662 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:31:50.050276   72662 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:31:50.050330   72662 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:31:50.050379   72662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:31:50.050426   72662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:31:50.050504   72662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:31:50.050618   72662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:31:50.050697   72662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:31:50.050848   72662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:31:50.050990   72662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:31:50.051055   72662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:31:50.051151   72662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:31:50.052358   72662 out.go:204]   - Booting up control plane ...
	I0814 17:31:50.052441   72662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:31:50.052504   72662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:31:50.052569   72662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:31:50.052636   72662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:31:50.052772   72662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:31:50.052820   72662 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:31:50.052877   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:31:50.053045   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:31:50.053103   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:31:50.053330   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:31:50.053436   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:31:50.053691   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:31:50.053753   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:31:50.053919   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:31:50.053988   72662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:31:50.054157   72662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:31:50.054168   72662 kubeadm.go:310] 
	I0814 17:31:50.054204   72662 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:31:50.054238   72662 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:31:50.054251   72662 kubeadm.go:310] 
	I0814 17:31:50.054294   72662 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:31:50.054324   72662 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:31:50.054423   72662 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:31:50.054431   72662 kubeadm.go:310] 
	I0814 17:31:50.054518   72662 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:31:50.054547   72662 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:31:50.054576   72662 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:31:50.054582   72662 kubeadm.go:310] 
	I0814 17:31:50.054674   72662 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:31:50.054743   72662 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:31:50.054749   72662 kubeadm.go:310] 
	I0814 17:31:50.054836   72662 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:31:50.054924   72662 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:31:50.055010   72662 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:31:50.055075   72662 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:31:50.055088   72662 kubeadm.go:310] 
	I0814 17:31:50.055135   72662 kubeadm.go:394] duration metric: took 3m56.643087939s to StartCluster
	I0814 17:31:50.055173   72662 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:31:50.055228   72662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:31:50.093729   72662 cri.go:89] found id: ""
	I0814 17:31:50.093756   72662 logs.go:276] 0 containers: []
	W0814 17:31:50.093766   72662 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:31:50.093774   72662 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:31:50.093838   72662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:31:50.124777   72662 cri.go:89] found id: ""
	I0814 17:31:50.124814   72662 logs.go:276] 0 containers: []
	W0814 17:31:50.124825   72662 logs.go:278] No container was found matching "etcd"
	I0814 17:31:50.124833   72662 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:31:50.124889   72662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:31:50.156802   72662 cri.go:89] found id: ""
	I0814 17:31:50.156832   72662 logs.go:276] 0 containers: []
	W0814 17:31:50.156844   72662 logs.go:278] No container was found matching "coredns"
	I0814 17:31:50.156852   72662 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:31:50.156900   72662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:31:50.187764   72662 cri.go:89] found id: ""
	I0814 17:31:50.187797   72662 logs.go:276] 0 containers: []
	W0814 17:31:50.187809   72662 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:31:50.187817   72662 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:31:50.187867   72662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:31:50.219830   72662 cri.go:89] found id: ""
	I0814 17:31:50.219865   72662 logs.go:276] 0 containers: []
	W0814 17:31:50.219874   72662 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:31:50.219881   72662 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:31:50.219931   72662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:31:50.251107   72662 cri.go:89] found id: ""
	I0814 17:31:50.251133   72662 logs.go:276] 0 containers: []
	W0814 17:31:50.251143   72662 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:31:50.251149   72662 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:31:50.251199   72662 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:31:50.285694   72662 cri.go:89] found id: ""
	I0814 17:31:50.285730   72662 logs.go:276] 0 containers: []
	W0814 17:31:50.285740   72662 logs.go:278] No container was found matching "kindnet"
	I0814 17:31:50.285752   72662 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:31:50.285768   72662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:31:50.385313   72662 logs.go:123] Gathering logs for container status ...
	I0814 17:31:50.385359   72662 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:31:50.421906   72662 logs.go:123] Gathering logs for kubelet ...
	I0814 17:31:50.421939   72662 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:31:50.475561   72662 logs.go:123] Gathering logs for dmesg ...
	I0814 17:31:50.475597   72662 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:31:50.497635   72662 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:31:50.497676   72662 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:31:50.621268   72662 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0814 17:31:50.621311   72662 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:31:50.621351   72662 out.go:239] * 
	* 
	W0814 17:31:50.621402   72662 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:31:50.621424   72662 out.go:239] * 
	* 
	W0814 17:31:50.622258   72662 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:31:50.625018   72662 out.go:177] 
	W0814 17:31:50.626121   72662 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:31:50.626184   72662 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:31:50.626217   72662 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:31:50.627756   72662 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-505584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 6 (214.524735ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:31:50.881895   79238 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-505584" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (289.57s)
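A minimal triage sketch assembled only from the suggestions printed in the log above (the kubeadm kubelet-check hints and the K8S_KUBELET_NOT_RUNNING suggestion); the profile name old-k8s-version-505584, the cri-o socket path, and the start flags are taken from the recorded invocation, and the exact re-run is an assumption, not part of this test run:

	# inspect the kubelet inside the minikube VM (commands quoted from the kubeadm output above)
	minikube ssh -p old-k8s-version-505584 -- 'systemctl status kubelet; sudo journalctl -xeu kubelet | tail -n 50'
	minikube ssh -p old-k8s-version-505584 -- 'sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	# retry the start with the cgroup-driver hint from the suggestion line (see issue 4172 linked above)
	minikube start -p old-k8s-version-505584 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd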

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-545149 --alsologtostderr -v=3
E0814 17:29:21.723118   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-545149 --alsologtostderr -v=3: exit status 82 (2m0.510873659s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-545149"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 17:29:20.304193   78282 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:29:20.304310   78282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:29:20.304320   78282 out.go:304] Setting ErrFile to fd 2...
	I0814 17:29:20.304327   78282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:29:20.304559   78282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:29:20.304848   78282 out.go:298] Setting JSON to false
	I0814 17:29:20.304948   78282 mustload.go:65] Loading cluster: no-preload-545149
	I0814 17:29:20.305296   78282 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:29:20.305379   78282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/config.json ...
	I0814 17:29:20.305568   78282 mustload.go:65] Loading cluster: no-preload-545149
	I0814 17:29:20.305687   78282 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:29:20.305720   78282 stop.go:39] StopHost: no-preload-545149
	I0814 17:29:20.306126   78282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:29:20.306180   78282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:29:20.321008   78282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0814 17:29:20.321489   78282 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:29:20.322086   78282 main.go:141] libmachine: Using API Version  1
	I0814 17:29:20.322111   78282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:29:20.322529   78282 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:29:20.324916   78282 out.go:177] * Stopping node "no-preload-545149"  ...
	I0814 17:29:20.326104   78282 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 17:29:20.326153   78282 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:29:20.326364   78282 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 17:29:20.326388   78282 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:29:20.329880   78282 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:29:20.330331   78282 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:28:09 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:29:20.330386   78282 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:29:20.330624   78282 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:29:20.330822   78282 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:29:20.331067   78282 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:29:20.331234   78282 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:29:20.429615   78282 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 17:29:20.500031   78282 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 17:29:20.568108   78282 main.go:141] libmachine: Stopping "no-preload-545149"...
	I0814 17:29:20.568192   78282 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:29:20.569953   78282 main.go:141] libmachine: (no-preload-545149) Calling .Stop
	I0814 17:29:20.574107   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 0/120
	I0814 17:29:21.575457   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 1/120
	I0814 17:29:22.576908   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 2/120
	I0814 17:29:23.578280   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 3/120
	I0814 17:29:24.579945   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 4/120
	I0814 17:29:25.582035   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 5/120
	I0814 17:29:26.583764   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 6/120
	I0814 17:29:27.585260   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 7/120
	I0814 17:29:28.587566   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 8/120
	I0814 17:29:29.589693   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 9/120
	I0814 17:29:30.591637   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 10/120
	I0814 17:29:31.593074   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 11/120
	I0814 17:29:32.594409   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 12/120
	I0814 17:29:33.595944   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 13/120
	I0814 17:29:34.597795   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 14/120
	I0814 17:29:35.599926   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 15/120
	I0814 17:29:36.601687   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 16/120
	I0814 17:29:37.602794   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 17/120
	I0814 17:29:38.605066   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 18/120
	I0814 17:29:39.606471   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 19/120
	I0814 17:29:40.609278   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 20/120
	I0814 17:29:41.610771   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 21/120
	I0814 17:29:42.611990   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 22/120
	I0814 17:29:43.613147   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 23/120
	I0814 17:29:44.615402   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 24/120
	I0814 17:29:45.617197   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 25/120
	I0814 17:29:46.618807   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 26/120
	I0814 17:29:47.620285   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 27/120
	I0814 17:29:48.621800   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 28/120
	I0814 17:29:49.623104   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 29/120
	I0814 17:29:50.625336   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 30/120
	I0814 17:29:51.626689   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 31/120
	I0814 17:29:52.628051   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 32/120
	I0814 17:29:53.629477   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 33/120
	I0814 17:29:54.631751   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 34/120
	I0814 17:29:55.633633   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 35/120
	I0814 17:29:56.635161   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 36/120
	I0814 17:29:57.636737   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 37/120
	I0814 17:29:58.638272   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 38/120
	I0814 17:29:59.639830   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 39/120
	I0814 17:30:00.641920   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 40/120
	I0814 17:30:01.643223   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 41/120
	I0814 17:30:02.644658   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 42/120
	I0814 17:30:03.645907   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 43/120
	I0814 17:30:04.647232   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 44/120
	I0814 17:30:05.648638   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 45/120
	I0814 17:30:06.650027   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 46/120
	I0814 17:30:07.651449   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 47/120
	I0814 17:30:08.652600   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 48/120
	I0814 17:30:09.654087   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 49/120
	I0814 17:30:10.656437   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 50/120
	I0814 17:30:11.657843   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 51/120
	I0814 17:30:12.659348   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 52/120
	I0814 17:30:13.660897   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 53/120
	I0814 17:30:14.662242   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 54/120
	I0814 17:30:15.664344   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 55/120
	I0814 17:30:16.665695   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 56/120
	I0814 17:30:17.667067   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 57/120
	I0814 17:30:18.668514   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 58/120
	I0814 17:30:19.669992   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 59/120
	I0814 17:30:20.672423   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 60/120
	I0814 17:30:21.673975   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 61/120
	I0814 17:30:22.675374   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 62/120
	I0814 17:30:23.676927   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 63/120
	I0814 17:30:24.678334   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 64/120
	I0814 17:30:25.680263   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 65/120
	I0814 17:30:26.681811   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 66/120
	I0814 17:30:27.683073   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 67/120
	I0814 17:30:28.684656   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 68/120
	I0814 17:30:29.685872   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 69/120
	I0814 17:30:30.688275   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 70/120
	I0814 17:30:31.689529   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 71/120
	I0814 17:30:32.691093   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 72/120
	I0814 17:30:33.692414   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 73/120
	I0814 17:30:34.693922   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 74/120
	I0814 17:30:35.696080   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 75/120
	I0814 17:30:36.697527   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 76/120
	I0814 17:30:37.699019   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 77/120
	I0814 17:30:38.700575   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 78/120
	I0814 17:30:39.701956   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 79/120
	I0814 17:30:40.703311   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 80/120
	I0814 17:30:41.704663   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 81/120
	I0814 17:30:42.706109   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 82/120
	I0814 17:30:43.707381   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 83/120
	I0814 17:30:44.708842   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 84/120
	I0814 17:30:45.710913   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 85/120
	I0814 17:30:46.712236   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 86/120
	I0814 17:30:47.713520   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 87/120
	I0814 17:30:48.714816   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 88/120
	I0814 17:30:49.716296   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 89/120
	I0814 17:30:50.718635   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 90/120
	I0814 17:30:51.720484   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 91/120
	I0814 17:30:52.721869   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 92/120
	I0814 17:30:53.723422   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 93/120
	I0814 17:30:54.724923   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 94/120
	I0814 17:30:55.726898   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 95/120
	I0814 17:30:56.728436   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 96/120
	I0814 17:30:57.729923   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 97/120
	I0814 17:30:58.731362   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 98/120
	I0814 17:30:59.732666   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 99/120
	I0814 17:31:00.735224   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 100/120
	I0814 17:31:01.736747   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 101/120
	I0814 17:31:02.738382   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 102/120
	I0814 17:31:03.739867   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 103/120
	I0814 17:31:04.742037   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 104/120
	I0814 17:31:05.743893   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 105/120
	I0814 17:31:06.745390   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 106/120
	I0814 17:31:07.746641   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 107/120
	I0814 17:31:08.748146   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 108/120
	I0814 17:31:09.749486   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 109/120
	I0814 17:31:10.751656   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 110/120
	I0814 17:31:11.752987   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 111/120
	I0814 17:31:12.754294   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 112/120
	I0814 17:31:13.755800   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 113/120
	I0814 17:31:14.757167   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 114/120
	I0814 17:31:15.759276   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 115/120
	I0814 17:31:16.760518   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 116/120
	I0814 17:31:17.761884   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 117/120
	I0814 17:31:18.763516   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 118/120
	I0814 17:31:19.765009   78282 main.go:141] libmachine: (no-preload-545149) Waiting for machine to stop 119/120
	I0814 17:31:20.765706   78282 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 17:31:20.765767   78282 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 17:31:20.767393   78282 out.go:177] 
	W0814 17:31:20.768618   78282 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 17:31:20.768633   78282 out.go:239] * 
	* 
	W0814 17:31:20.771140   78282 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:31:20.772398   78282 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-545149 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149
E0814 17:31:25.081051   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:25.087397   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:25.098729   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:25.120072   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:25.161479   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:25.242966   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:25.404763   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:25.726914   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:26.368839   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:27.650582   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:30.212767   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149: exit status 3 (18.486377027s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:31:39.259641   78993 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0814 17:31:39.259684   78993 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-545149" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.00s)
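Note on the failure above: the captured log shows the stop path backing up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, issuing the libvirt Stop call, and then polling the VM state once per second for 120 attempts before giving up, which is the ~2-minute wall that surfaces as GUEST_STOP_TIMEOUT and exit status 82. The Go snippet below is only a minimal sketch of that poll-until-stopped pattern, not minikube's actual implementation; stillRunning and waitForStop are illustrative names.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stillRunning is a placeholder for a real state probe (e.g. a libvirt
// domain-state lookup). Here it always reports "running", which matches
// the failing test: the guest never reaches the stopped state.
func stillRunning() bool { return true }

// waitForStop polls the machine state once per second for at most
// `attempts` tries, mirroring the "Waiting for machine to stop N/120"
// lines in the captured log.
func waitForStop(attempts int) error {
	for i := 0; i < attempts; i++ {
		if !stillRunning() {
			return nil // machine reached the stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120); err != nil {
		// In the report above this condition is surfaced as
		// GUEST_STOP_TIMEOUT and the command exits with status 82.
		fmt.Println("stop err:", err)
	}
}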

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-309673 --alsologtostderr -v=3
E0814 17:29:37.086493   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:57.567813   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:58.429303   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:58.435735   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:58.447072   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:58.468484   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:58.509910   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:58.591443   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:58.753699   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:59.075468   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:59.717157   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:00.998642   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:03.560129   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:08.681981   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-309673 --alsologtostderr -v=3: exit status 82 (2m0.503030996s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-309673"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 17:29:33.792506   78432 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:29:33.792904   78432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:29:33.792951   78432 out.go:304] Setting ErrFile to fd 2...
	I0814 17:29:33.792969   78432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:29:33.793440   78432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:29:33.794072   78432 out.go:298] Setting JSON to false
	I0814 17:29:33.794192   78432 mustload.go:65] Loading cluster: embed-certs-309673
	I0814 17:29:33.794561   78432 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:29:33.794648   78432 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/config.json ...
	I0814 17:29:33.794842   78432 mustload.go:65] Loading cluster: embed-certs-309673
	I0814 17:29:33.794967   78432 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:29:33.795038   78432 stop.go:39] StopHost: embed-certs-309673
	I0814 17:29:33.795465   78432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:29:33.795512   78432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:29:33.809965   78432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45387
	I0814 17:29:33.810555   78432 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:29:33.811075   78432 main.go:141] libmachine: Using API Version  1
	I0814 17:29:33.811101   78432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:29:33.811481   78432 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:29:33.813685   78432 out.go:177] * Stopping node "embed-certs-309673"  ...
	I0814 17:29:33.814754   78432 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 17:29:33.814776   78432 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:29:33.814999   78432 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 17:29:33.815025   78432 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:29:33.817672   78432 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:29:33.818047   78432 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:28:37 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:29:33.818077   78432 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:29:33.818217   78432 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:29:33.818398   78432 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:29:33.818578   78432 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:29:33.818717   78432 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:29:33.908849   78432 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 17:29:33.976711   78432 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 17:29:34.045610   78432 main.go:141] libmachine: Stopping "embed-certs-309673"...
	I0814 17:29:34.045643   78432 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:29:34.047537   78432 main.go:141] libmachine: (embed-certs-309673) Calling .Stop
	I0814 17:29:34.051136   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 0/120
	I0814 17:29:35.053191   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 1/120
	I0814 17:29:36.054594   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 2/120
	I0814 17:29:37.056176   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 3/120
	I0814 17:29:38.057562   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 4/120
	I0814 17:29:39.059881   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 5/120
	I0814 17:29:40.061328   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 6/120
	I0814 17:29:41.062880   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 7/120
	I0814 17:29:42.064239   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 8/120
	I0814 17:29:43.065648   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 9/120
	I0814 17:29:44.067985   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 10/120
	I0814 17:29:45.069160   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 11/120
	I0814 17:29:46.070467   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 12/120
	I0814 17:29:47.071856   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 13/120
	I0814 17:29:48.073812   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 14/120
	I0814 17:29:49.075689   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 15/120
	I0814 17:29:50.077840   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 16/120
	I0814 17:29:51.079828   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 17/120
	I0814 17:29:52.081308   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 18/120
	I0814 17:29:53.082848   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 19/120
	I0814 17:29:54.084937   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 20/120
	I0814 17:29:55.086303   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 21/120
	I0814 17:29:56.087709   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 22/120
	I0814 17:29:57.089339   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 23/120
	I0814 17:29:58.090676   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 24/120
	I0814 17:29:59.092736   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 25/120
	I0814 17:30:00.094074   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 26/120
	I0814 17:30:01.095676   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 27/120
	I0814 17:30:02.097898   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 28/120
	I0814 17:30:03.099427   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 29/120
	I0814 17:30:04.101913   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 30/120
	I0814 17:30:05.104324   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 31/120
	I0814 17:30:06.105720   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 32/120
	I0814 17:30:07.107131   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 33/120
	I0814 17:30:08.109009   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 34/120
	I0814 17:30:09.110900   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 35/120
	I0814 17:30:10.112313   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 36/120
	I0814 17:30:11.114159   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 37/120
	I0814 17:30:12.115682   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 38/120
	I0814 17:30:13.117801   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 39/120
	I0814 17:30:14.120407   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 40/120
	I0814 17:30:15.121939   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 41/120
	I0814 17:30:16.123226   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 42/120
	I0814 17:30:17.124666   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 43/120
	I0814 17:30:18.126901   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 44/120
	I0814 17:30:19.129111   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 45/120
	I0814 17:30:20.131637   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 46/120
	I0814 17:30:21.133301   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 47/120
	I0814 17:30:22.134703   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 48/120
	I0814 17:30:23.136449   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 49/120
	I0814 17:30:24.138732   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 50/120
	I0814 17:30:25.140552   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 51/120
	I0814 17:30:26.141957   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 52/120
	I0814 17:30:27.143270   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 53/120
	I0814 17:30:28.144920   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 54/120
	I0814 17:30:29.146918   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 55/120
	I0814 17:30:30.148122   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 56/120
	I0814 17:30:31.149646   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 57/120
	I0814 17:30:32.151107   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 58/120
	I0814 17:30:33.153005   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 59/120
	I0814 17:30:34.154764   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 60/120
	I0814 17:30:35.156230   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 61/120
	I0814 17:30:36.157605   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 62/120
	I0814 17:30:37.159068   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 63/120
	I0814 17:30:38.160746   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 64/120
	I0814 17:30:39.162421   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 65/120
	I0814 17:30:40.163926   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 66/120
	I0814 17:30:41.165464   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 67/120
	I0814 17:30:42.167218   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 68/120
	I0814 17:30:43.168779   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 69/120
	I0814 17:30:44.170916   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 70/120
	I0814 17:30:45.172217   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 71/120
	I0814 17:30:46.173544   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 72/120
	I0814 17:30:47.174702   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 73/120
	I0814 17:30:48.175986   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 74/120
	I0814 17:30:49.177993   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 75/120
	I0814 17:30:50.179575   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 76/120
	I0814 17:30:51.181277   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 77/120
	I0814 17:30:52.183017   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 78/120
	I0814 17:30:53.184573   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 79/120
	I0814 17:30:54.186937   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 80/120
	I0814 17:30:55.188410   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 81/120
	I0814 17:30:56.189885   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 82/120
	I0814 17:30:57.191380   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 83/120
	I0814 17:30:58.192988   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 84/120
	I0814 17:30:59.194917   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 85/120
	I0814 17:31:00.196815   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 86/120
	I0814 17:31:01.198144   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 87/120
	I0814 17:31:02.199698   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 88/120
	I0814 17:31:03.201144   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 89/120
	I0814 17:31:04.202720   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 90/120
	I0814 17:31:05.204185   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 91/120
	I0814 17:31:06.205721   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 92/120
	I0814 17:31:07.207370   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 93/120
	I0814 17:31:08.209147   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 94/120
	I0814 17:31:09.211217   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 95/120
	I0814 17:31:10.212868   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 96/120
	I0814 17:31:11.214186   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 97/120
	I0814 17:31:12.215687   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 98/120
	I0814 17:31:13.217075   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 99/120
	I0814 17:31:14.219364   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 100/120
	I0814 17:31:15.220732   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 101/120
	I0814 17:31:16.222158   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 102/120
	I0814 17:31:17.223636   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 103/120
	I0814 17:31:18.225012   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 104/120
	I0814 17:31:19.227006   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 105/120
	I0814 17:31:20.228418   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 106/120
	I0814 17:31:21.229841   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 107/120
	I0814 17:31:22.231424   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 108/120
	I0814 17:31:23.232901   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 109/120
	I0814 17:31:24.235047   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 110/120
	I0814 17:31:25.236698   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 111/120
	I0814 17:31:26.238170   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 112/120
	I0814 17:31:27.239663   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 113/120
	I0814 17:31:28.240989   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 114/120
	I0814 17:31:29.242966   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 115/120
	I0814 17:31:30.244380   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 116/120
	I0814 17:31:31.245647   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 117/120
	I0814 17:31:32.247335   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 118/120
	I0814 17:31:33.248880   78432 main.go:141] libmachine: (embed-certs-309673) Waiting for machine to stop 119/120
	I0814 17:31:34.249644   78432 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 17:31:34.249711   78432 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 17:31:34.251516   78432 out.go:177] 
	W0814 17:31:34.252824   78432 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 17:31:34.252840   78432 out.go:239] * 
	* 
	W0814 17:31:34.255423   78432 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:31:34.256653   78432 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-309673 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673
E0814 17:31:35.334273   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:36.260215   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673: exit status 3 (18.569715659s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:31:52.827650   79065 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.2:22: connect: no route to host
	E0814 17:31:52.827668   79065 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.2:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-309673" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.07s)
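Note on the post-mortem check above: out/minikube-linux-amd64 status --format={{.Host}} reports the host as Error (exit status 3) because the node's SSH endpoint became unreachable while the VM was stuck mid-shutdown, hence the repeated "dial tcp 192.168.61.2:22: connect: no route to host" messages. The sketch below illustrates that reachability classification under the simplifying assumption that a plain TCP dial stands in for the real SSH session; hostState and the hard-coded address are placeholders, not minikube code.

package main

import (
	"fmt"
	"net"
	"time"
)

// hostState is a simplified stand-in for the post-mortem host check:
// "Running" when the node's SSH port answers, "Error" when the dial
// fails, as in the "no route to host" errors captured above.
func hostState(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return "Error"
	}
	conn.Close()
	return "Running"
}

func main() {
	// 192.168.61.2 is the embed-certs node IP from the log; any
	// unreachable address will exercise the "Error" branch.
	fmt.Println(hostState("192.168.61.2:22"))
}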

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-885666 --alsologtostderr -v=3
E0814 17:30:38.529287   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:39.405974   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:55.282987   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:55.289462   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:55.300838   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:55.322187   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:55.363597   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:55.445181   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:55.606661   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:55.928667   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:56.570761   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:30:57.852355   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:00.414359   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:05.536613   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:15.778667   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:31:20.367885   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-885666 --alsologtostderr -v=3: exit status 82 (2m0.489027304s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-885666"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 17:30:24.141995   78745 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:30:24.142251   78745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:30:24.142261   78745 out.go:304] Setting ErrFile to fd 2...
	I0814 17:30:24.142265   78745 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:30:24.142461   78745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:30:24.142697   78745 out.go:298] Setting JSON to false
	I0814 17:30:24.142772   78745 mustload.go:65] Loading cluster: default-k8s-diff-port-885666
	I0814 17:30:24.143101   78745 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:30:24.143167   78745 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/config.json ...
	I0814 17:30:24.143368   78745 mustload.go:65] Loading cluster: default-k8s-diff-port-885666
	I0814 17:30:24.143488   78745 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:30:24.143514   78745 stop.go:39] StopHost: default-k8s-diff-port-885666
	I0814 17:30:24.143875   78745 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:30:24.143925   78745 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:30:24.159378   78745 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41251
	I0814 17:30:24.159824   78745 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:30:24.160342   78745 main.go:141] libmachine: Using API Version  1
	I0814 17:30:24.160362   78745 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:30:24.160741   78745 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:30:24.163050   78745 out.go:177] * Stopping node "default-k8s-diff-port-885666"  ...
	I0814 17:30:24.164681   78745 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0814 17:30:24.164726   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:30:24.165052   78745 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0814 17:30:24.165083   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:30:24.168147   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:30:24.168613   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:29:05 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:30:24.168638   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:30:24.168815   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:30:24.168977   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:30:24.169141   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:30:24.169310   78745 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:30:24.261487   78745 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0814 17:30:24.327643   78745 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0814 17:30:24.384195   78745 main.go:141] libmachine: Stopping "default-k8s-diff-port-885666"...
	I0814 17:30:24.384226   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:30:24.385806   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Stop
	I0814 17:30:24.389654   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 0/120
	I0814 17:30:25.390884   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 1/120
	I0814 17:30:26.392329   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 2/120
	I0814 17:30:27.393785   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 3/120
	I0814 17:30:28.395243   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 4/120
	I0814 17:30:29.397476   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 5/120
	I0814 17:30:30.399170   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 6/120
	I0814 17:30:31.400540   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 7/120
	I0814 17:30:32.402116   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 8/120
	I0814 17:30:33.403985   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 9/120
	I0814 17:30:34.406666   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 10/120
	I0814 17:30:35.408207   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 11/120
	I0814 17:30:36.409762   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 12/120
	I0814 17:30:37.411230   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 13/120
	I0814 17:30:38.412750   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 14/120
	I0814 17:30:39.414618   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 15/120
	I0814 17:30:40.416161   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 16/120
	I0814 17:30:41.417529   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 17/120
	I0814 17:30:42.419019   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 18/120
	I0814 17:30:43.420573   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 19/120
	I0814 17:30:44.421820   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 20/120
	I0814 17:30:45.423286   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 21/120
	I0814 17:30:46.424822   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 22/120
	I0814 17:30:47.426257   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 23/120
	I0814 17:30:48.427720   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 24/120
	I0814 17:30:49.429804   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 25/120
	I0814 17:30:50.431362   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 26/120
	I0814 17:30:51.433077   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 27/120
	I0814 17:30:52.434407   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 28/120
	I0814 17:30:53.435981   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 29/120
	I0814 17:30:54.438342   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 30/120
	I0814 17:30:55.439899   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 31/120
	I0814 17:30:56.441378   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 32/120
	I0814 17:30:57.442847   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 33/120
	I0814 17:30:58.444243   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 34/120
	I0814 17:30:59.446397   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 35/120
	I0814 17:31:00.448095   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 36/120
	I0814 17:31:01.449986   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 37/120
	I0814 17:31:02.451561   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 38/120
	I0814 17:31:03.452961   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 39/120
	I0814 17:31:04.455638   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 40/120
	I0814 17:31:05.457217   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 41/120
	I0814 17:31:06.458663   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 42/120
	I0814 17:31:07.460004   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 43/120
	I0814 17:31:08.461816   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 44/120
	I0814 17:31:09.464129   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 45/120
	I0814 17:31:10.465700   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 46/120
	I0814 17:31:11.467069   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 47/120
	I0814 17:31:12.468320   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 48/120
	I0814 17:31:13.469645   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 49/120
	I0814 17:31:14.471899   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 50/120
	I0814 17:31:15.473087   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 51/120
	I0814 17:31:16.474396   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 52/120
	I0814 17:31:17.475745   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 53/120
	I0814 17:31:18.477117   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 54/120
	I0814 17:31:19.479184   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 55/120
	I0814 17:31:20.480451   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 56/120
	I0814 17:31:21.481949   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 57/120
	I0814 17:31:22.483269   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 58/120
	I0814 17:31:23.484767   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 59/120
	I0814 17:31:24.487125   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 60/120
	I0814 17:31:25.488387   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 61/120
	I0814 17:31:26.489803   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 62/120
	I0814 17:31:27.491114   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 63/120
	I0814 17:31:28.492465   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 64/120
	I0814 17:31:29.494525   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 65/120
	I0814 17:31:30.495962   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 66/120
	I0814 17:31:31.497490   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 67/120
	I0814 17:31:32.499020   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 68/120
	I0814 17:31:33.500444   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 69/120
	I0814 17:31:34.501738   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 70/120
	I0814 17:31:35.503110   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 71/120
	I0814 17:31:36.504677   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 72/120
	I0814 17:31:37.506065   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 73/120
	I0814 17:31:38.507353   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 74/120
	I0814 17:31:39.509489   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 75/120
	I0814 17:31:40.511052   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 76/120
	I0814 17:31:41.512784   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 77/120
	I0814 17:31:42.513870   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 78/120
	I0814 17:31:43.515367   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 79/120
	I0814 17:31:44.517687   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 80/120
	I0814 17:31:45.518654   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 81/120
	I0814 17:31:46.520105   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 82/120
	I0814 17:31:47.521689   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 83/120
	I0814 17:31:48.523151   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 84/120
	I0814 17:31:49.525585   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 85/120
	I0814 17:31:50.527019   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 86/120
	I0814 17:31:51.529173   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 87/120
	I0814 17:31:52.531651   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 88/120
	I0814 17:31:53.533197   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 89/120
	I0814 17:31:54.535758   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 90/120
	I0814 17:31:55.537348   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 91/120
	I0814 17:31:56.538656   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 92/120
	I0814 17:31:57.539949   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 93/120
	I0814 17:31:58.541505   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 94/120
	I0814 17:31:59.543716   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 95/120
	I0814 17:32:00.545112   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 96/120
	I0814 17:32:01.546692   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 97/120
	I0814 17:32:02.548131   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 98/120
	I0814 17:32:03.549512   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 99/120
	I0814 17:32:04.551902   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 100/120
	I0814 17:32:05.553453   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 101/120
	I0814 17:32:06.555067   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 102/120
	I0814 17:32:07.556416   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 103/120
	I0814 17:32:08.557775   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 104/120
	I0814 17:32:09.560228   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 105/120
	I0814 17:32:10.561870   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 106/120
	I0814 17:32:11.563457   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 107/120
	I0814 17:32:12.565069   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 108/120
	I0814 17:32:13.566567   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 109/120
	I0814 17:32:14.568068   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 110/120
	I0814 17:32:15.569627   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 111/120
	I0814 17:32:16.570973   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 112/120
	I0814 17:32:17.572319   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 113/120
	I0814 17:32:18.573917   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 114/120
	I0814 17:32:19.576365   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 115/120
	I0814 17:32:20.577933   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 116/120
	I0814 17:32:21.579201   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 117/120
	I0814 17:32:22.580534   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 118/120
	I0814 17:32:23.582334   78745 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for machine to stop 119/120
	I0814 17:32:24.583641   78745 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0814 17:32:24.583689   78745 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0814 17:32:24.585376   78745 out.go:177] 
	W0814 17:32:24.586596   78745 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0814 17:32:24.586613   78745 out.go:239] * 
	* 
	W0814 17:32:24.589201   78745 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:32:24.590381   78745 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-885666 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
E0814 17:32:24.737389   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:27.127499   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:29.859286   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:32.249282   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:40.101562   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:42.289502   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:42.491182   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666: exit status 3 (18.668082088s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:32:43.259756   79642 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0814 17:32:43.259782   79642 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-885666" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149: exit status 3 (3.167784913s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:31:42.427679   79111 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0814 17:31:42.427708   79111 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-545149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0814 17:31:45.575975   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-545149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151962315s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-545149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149: exit status 3 (3.063778621s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:31:51.643682   79190 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E0814 17:31:51.643704   79190 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-545149" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-505584 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-505584 create -f testdata/busybox.yaml: exit status 1 (41.84083ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-505584" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-505584 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 6 (209.078971ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:31:51.134404   79277 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-505584" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 6 (211.071134ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:31:51.345207   79307 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-505584" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-505584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-505584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m53.196481038s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-505584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-505584 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-505584 describe deploy/metrics-server -n kube-system: exit status 1 (43.208648ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-505584" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-505584 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 6 (223.415365ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:33:44.796694   80099 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-505584" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (113.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673: exit status 3 (3.167960699s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:31:55.995706   79408 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.2:22: connect: no route to host
	E0814 17:31:55.995730   79408 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.2:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-309673 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0814 17:32:00.451044   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-309673 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152626173s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.2:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-309673 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673: exit status 3 (3.063308202s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:32:05.211722   79491 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.2:22: connect: no route to host
	E0814 17:32:05.211750   79491 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.2:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-309673" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666: exit status 3 (3.167596461s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:32:46.427625   79754 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0814 17:32:46.427649   79754 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-885666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0814 17:32:47.019489   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-885666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152542755s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-885666 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666: exit status 3 (3.063325869s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0814 17:32:55.643734   79841 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host
	E0814 17:32:55.643755   79841 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.184:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-885666" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (717.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-505584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0814 17:33:54.842620   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:34:08.941369   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:34:16.591011   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:34:29.459649   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:34:35.804953   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:34:44.293337   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:34:58.429803   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:35:03.467309   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:35:05.856320   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:35:26.131397   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:35:52.532449   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:35:55.282313   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:35:57.726576   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:36:22.986013   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:36:25.080948   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:36:52.783579   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:37:19.605804   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:37:21.996966   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:37:47.308661   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:37:49.698579   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:38:02.588756   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:38:13.865492   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:38:41.568861   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:39:16.591874   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:39:29.459852   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:39:58.429028   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:40:55.282885   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:41:25.080790   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-505584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m54.41070977s)

                                                
                                                
-- stdout --
	* [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-505584" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 17:33:46.321266   80228 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:33:46.321519   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321529   80228 out.go:304] Setting ErrFile to fd 2...
	I0814 17:33:46.321533   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321691   80228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:33:46.322185   80228 out.go:298] Setting JSON to false
	I0814 17:33:46.323102   80228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8170,"bootTime":1723648656,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:33:46.323161   80228 start.go:139] virtualization: kvm guest
	I0814 17:33:46.325361   80228 out.go:177] * [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:33:46.326668   80228 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:33:46.326679   80228 notify.go:220] Checking for updates...
	I0814 17:33:46.329217   80228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:33:46.330813   80228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:33:46.332019   80228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:33:46.333264   80228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:33:46.334480   80228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:33:46.336108   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:33:46.336521   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.336564   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.351154   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0814 17:33:46.351563   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.352042   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.352061   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.352395   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.352567   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.354248   80228 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 17:33:46.355547   80228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:33:46.355834   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.355865   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.370976   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0814 17:33:46.371452   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.371977   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.372008   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.372376   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.372624   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.407797   80228 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:33:46.408905   80228 start.go:297] selected driver: kvm2
	I0814 17:33:46.408918   80228 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.409022   80228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:33:46.409677   80228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.409753   80228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:33:46.424801   80228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:33:46.425288   80228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:33:46.425338   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:33:46.425349   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:33:46.425396   80228 start.go:340] cluster config:
	{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.425518   80228 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.427224   80228 out.go:177] * Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	I0814 17:33:46.428485   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:33:46.428516   80228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:33:46.428523   80228 cache.go:56] Caching tarball of preloaded images
	I0814 17:33:46.428589   80228 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:33:46.428600   80228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:33:46.428727   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:33:46.428899   80228 start.go:360] acquireMachinesLock for old-k8s-version-505584: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:37:09.324026   80228 start.go:364] duration metric: took 3m22.895078586s to acquireMachinesLock for "old-k8s-version-505584"
	I0814 17:37:09.324085   80228 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:09.324101   80228 fix.go:54] fixHost starting: 
	I0814 17:37:09.324533   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:09.324575   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:09.344085   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0814 17:37:09.344490   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:09.344980   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:37:09.345006   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:09.345416   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:09.345674   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:09.345842   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetState
	I0814 17:37:09.347489   80228 fix.go:112] recreateIfNeeded on old-k8s-version-505584: state=Stopped err=<nil>
	I0814 17:37:09.347511   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	W0814 17:37:09.347696   80228 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:09.349747   80228 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-505584" ...
	I0814 17:37:09.351136   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .Start
	I0814 17:37:09.351338   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring networks are active...
	I0814 17:37:09.352075   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network default is active
	I0814 17:37:09.352333   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network mk-old-k8s-version-505584 is active
	I0814 17:37:09.352701   80228 main.go:141] libmachine: (old-k8s-version-505584) Getting domain xml...
	I0814 17:37:09.353363   80228 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
	I0814 17:37:10.664390   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting to get IP...
	I0814 17:37:10.665484   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.665915   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.665980   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.665888   81116 retry.go:31] will retry after 285.047327ms: waiting for machine to come up
	I0814 17:37:10.952552   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.953009   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.953036   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.952973   81116 retry.go:31] will retry after 281.728141ms: waiting for machine to come up
	I0814 17:37:11.236576   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.237153   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.237192   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.237079   81116 retry.go:31] will retry after 341.673836ms: waiting for machine to come up
	I0814 17:37:11.580887   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.581466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.581500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.581392   81116 retry.go:31] will retry after 514.448726ms: waiting for machine to come up
	I0814 17:37:12.098137   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.098652   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.098740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.098642   81116 retry.go:31] will retry after 649.302617ms: waiting for machine to come up
	I0814 17:37:12.749349   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.749777   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.749803   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.749736   81116 retry.go:31] will retry after 897.486278ms: waiting for machine to come up
	I0814 17:37:13.649145   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:13.649666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:13.649698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:13.649621   81116 retry.go:31] will retry after 1.017213079s: waiting for machine to come up
	I0814 17:37:14.669187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:14.669715   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:14.669740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:14.669679   81116 retry.go:31] will retry after 1.014709613s: waiting for machine to come up
	I0814 17:37:15.685748   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:15.686269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:15.686299   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:15.686217   81116 retry.go:31] will retry after 1.476940798s: waiting for machine to come up
	I0814 17:37:17.164541   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:17.165093   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:17.165122   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:17.165017   81116 retry.go:31] will retry after 1.644726601s: waiting for machine to come up
	I0814 17:37:18.811628   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:18.812199   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:18.812224   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:18.812132   81116 retry.go:31] will retry after 2.740531885s: waiting for machine to come up
	I0814 17:37:21.555057   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:21.555530   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:21.555562   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:21.555484   81116 retry.go:31] will retry after 3.159225533s: waiting for machine to come up
	I0814 17:37:24.716173   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:24.716482   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:24.716507   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:24.716451   81116 retry.go:31] will retry after 3.32732131s: waiting for machine to come up
	I0814 17:37:28.045690   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046151   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has current primary IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046177   80228 main.go:141] libmachine: (old-k8s-version-505584) Found IP for machine: 192.168.72.49
	I0814 17:37:28.046192   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserving static IP address...
	I0814 17:37:28.046500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.046524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | skip adding static IP to network mk-old-k8s-version-505584 - found existing host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"}
	I0814 17:37:28.046540   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserved static IP address: 192.168.72.49
	I0814 17:37:28.046559   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting for SSH to be available...
	I0814 17:37:28.046571   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Getting to WaitForSSH function...
	I0814 17:37:28.048709   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049082   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.049106   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049252   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH client type: external
	I0814 17:37:28.049285   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa (-rw-------)
	I0814 17:37:28.049325   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:28.049342   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | About to run SSH command:
	I0814 17:37:28.049356   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | exit 0
	I0814 17:37:28.179844   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:28.180193   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:37:28.180865   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.183617   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184074   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.184118   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184367   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:37:28.184641   80228 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:28.184663   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:28.184891   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.187158   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187517   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.187547   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187696   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.187857   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188027   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188178   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.188320   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.188570   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.188587   80228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:28.303564   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:28.303597   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.303831   80228 buildroot.go:166] provisioning hostname "old-k8s-version-505584"
	I0814 17:37:28.303856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.304021   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.306826   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307180   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.307210   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307415   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.307608   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307769   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307915   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.308131   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.308336   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.308354   80228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-505584 && echo "old-k8s-version-505584" | sudo tee /etc/hostname
	I0814 17:37:28.434224   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-505584
	
	I0814 17:37:28.434261   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.437350   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437633   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.437666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.438077   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438245   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438395   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.438623   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.438832   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.438857   80228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-505584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-505584/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-505584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:28.564784   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:28.564815   80228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:28.564858   80228 buildroot.go:174] setting up certificates
	I0814 17:37:28.564872   80228 provision.go:84] configureAuth start
	I0814 17:37:28.564884   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.565188   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.568217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.568698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.568731   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.569013   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.571364   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571780   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.571805   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571961   80228 provision.go:143] copyHostCerts
	I0814 17:37:28.572023   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:28.572032   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:28.572076   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:28.572176   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:28.572184   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:28.572206   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:28.572275   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:28.572284   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:28.572337   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:28.572435   80228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-505584 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
	I0814 17:37:28.804798   80228 provision.go:177] copyRemoteCerts
	I0814 17:37:28.804853   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:28.804879   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.807967   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.808302   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808458   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.808690   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.808874   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.809001   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:28.900346   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:28.926959   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 17:37:28.955373   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:28.984436   80228 provision.go:87] duration metric: took 419.552519ms to configureAuth
	I0814 17:37:28.984463   80228 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:28.984630   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:37:28.984713   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.987602   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988077   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.988107   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988237   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.988486   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988641   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988768   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.988986   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.989209   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.989234   80228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:29.262630   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:29.262656   80228 machine.go:97] duration metric: took 1.078000469s to provisionDockerMachine
	I0814 17:37:29.262669   80228 start.go:293] postStartSetup for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:37:29.262683   80228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:29.262704   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.263051   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:29.263082   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.266020   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.266495   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266720   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.266919   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.267093   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.267253   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.354027   80228 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:29.358196   80228 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:29.358224   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:29.358304   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:29.358416   80228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:29.358543   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:29.367802   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:29.392802   80228 start.go:296] duration metric: took 130.117007ms for postStartSetup
	I0814 17:37:29.392846   80228 fix.go:56] duration metric: took 20.068754346s for fixHost
	I0814 17:37:29.392871   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.395638   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396032   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.396064   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396251   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.396516   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396698   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396893   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.397155   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:29.397326   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:29.397340   80228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:37:29.511889   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657049.468340520
	
	I0814 17:37:29.511913   80228 fix.go:216] guest clock: 1723657049.468340520
	I0814 17:37:29.511923   80228 fix.go:229] Guest: 2024-08-14 17:37:29.46834052 +0000 UTC Remote: 2024-08-14 17:37:29.392851248 +0000 UTC m=+223.104093144 (delta=75.489272ms)
	I0814 17:37:29.511983   80228 fix.go:200] guest clock delta is within tolerance: 75.489272ms
	I0814 17:37:29.511996   80228 start.go:83] releasing machines lock for "old-k8s-version-505584", held for 20.187937886s
	I0814 17:37:29.512031   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.512333   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:29.515152   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515487   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.515524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515735   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516299   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516497   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516643   80228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:29.516723   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.516727   80228 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:29.516752   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.519600   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.519751   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520017   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520045   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520164   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520192   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520341   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520423   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520520   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520588   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520646   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520718   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.520780   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.642824   80228 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:29.648834   80228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:29.795482   80228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:29.801407   80228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:29.801486   80228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:29.821662   80228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:29.821684   80228 start.go:495] detecting cgroup driver to use...
	I0814 17:37:29.821761   80228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:29.843829   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:29.859505   80228 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:29.859590   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:29.873790   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:29.889295   80228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:30.035909   80228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:30.209521   80228 docker.go:233] disabling docker service ...
	I0814 17:37:30.209574   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:30.226980   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:30.241678   80228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:30.375116   80228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:30.498357   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:30.512272   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:30.533062   80228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:37:30.533130   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.543595   80228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:30.543664   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.554139   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.564417   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.574627   80228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:30.584957   80228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:30.594667   80228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:30.594720   80228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:30.606826   80228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:30.621990   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:30.758992   80228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:30.915494   80228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:30.915572   80228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:30.920692   80228 start.go:563] Will wait 60s for crictl version
	I0814 17:37:30.920767   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:30.924365   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:30.964662   80228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:30.964756   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:30.995534   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:31.025400   80228 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:37:31.026943   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:31.030217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030630   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:31.030665   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030943   80228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:31.034960   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:31.047742   80228 kubeadm.go:883] updating cluster {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:31.047864   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:37:31.047926   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:31.092203   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:31.092278   80228 ssh_runner.go:195] Run: which lz4
	I0814 17:37:31.096471   80228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:37:31.100610   80228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:31.100642   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:37:32.582064   80228 crio.go:462] duration metric: took 1.485645107s to copy over tarball
	I0814 17:37:32.582151   80228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:35.556765   80228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974581109s)
	I0814 17:37:35.556795   80228 crio.go:469] duration metric: took 2.9747s to extract the tarball
	I0814 17:37:35.556802   80228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:35.599129   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:35.632752   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:35.632775   80228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:35.632831   80228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.632864   80228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.632892   80228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:37:35.632911   80228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.632944   80228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.633112   80228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634793   80228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.634821   80228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:37:35.634824   80228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634885   80228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.634910   80228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.635009   80228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.635082   80228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.635265   80228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.905566   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:37:35.953168   80228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:37:35.953210   80228 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:37:35.953260   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:35.957961   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:35.978859   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.978920   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.988556   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.993281   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.997933   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.018501   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.043527   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.146739   80228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:37:36.146812   80228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:37:36.146832   80228 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.146852   80228 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.146881   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.146891   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163832   80228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:37:36.163856   80228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:37:36.163877   80228 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.163889   80228 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.163923   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163924   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163927   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.172482   80228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:37:36.172530   80228 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.172599   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.195157   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.195208   80228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:37:36.195165   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.195242   80228 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.195245   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.195277   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.237454   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:37:36.237519   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.237549   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.292614   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.306771   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.306794   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.321893   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.339836   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.339929   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.426588   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.426659   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.433149   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.469717   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:36.477512   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.477583   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.477761   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.538635   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:37:36.557712   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:37:36.558304   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.700263   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:37:36.700333   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:37:36.700410   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:37:36.700481   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:37:36.700527   80228 cache_images.go:92] duration metric: took 1.067740607s to LoadCachedImages
	W0814 17:37:36.700602   80228 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
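
The image-cache pass above repeats one pattern per image: ask the runtime (via podman) for the image's ID, and if the pinned ID is not present, remove the stale tag with crictl so the copy from the local cache directory can be transferred instead. A compressed per-image sketch of that check, using the same commands the log issues (the image name and pinned ID are taken from the pause:3.2 lines above):

    IMG=registry.k8s.io/pause:3.2
    WANT=80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c   # pinned ID from the log line above
    HAVE=$(sudo podman image inspect --format '{{.Id}}' "$IMG" 2>/dev/null)
    if [ "$HAVE" != "$WANT" ]; then
      # Wrong or missing image: drop the stale tag so the cached copy can be loaded instead.
      sudo /usr/bin/crictl rmi "$IMG"
    fi
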
	I0814 17:37:36.700623   80228 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0814 17:37:36.700757   80228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-505584 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:36.700846   80228 ssh_runner.go:195] Run: crio config
	I0814 17:37:36.748814   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:37:36.748843   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:36.748860   80228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:36.748885   80228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-505584 NodeName:old-k8s-version-505584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:37:36.749053   80228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-505584"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:36.749129   80228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:37:36.760058   80228 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:36.760131   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:36.769388   80228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0814 17:37:36.786594   80228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:36.807695   80228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0814 17:37:36.825609   80228 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:36.829296   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:36.841882   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:36.976199   80228 ssh_runner.go:195] Run: sudo systemctl start kubelet
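
The block above writes the kubelet drop-in and unit files, makes sure control-plane.minikube.internal resolves to the node IP, and only then restarts kubelet. The /etc/hosts update is a small idempotent idiom worth calling out; the sketch below mirrors the exact command the log runs (IP and hostname copied from the log lines above):

    # Ensure control-plane.minikube.internal resolves to the node IP before kubelet starts.
    if ! grep -q $'192.168.72.49\tcontrol-plane.minikube.internal$' /etc/hosts; then
      # Drop any stale entry for the name, append the current mapping, and swap the file in.
      { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
        echo $'192.168.72.49\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    fi
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
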
	I0814 17:37:36.993682   80228 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584 for IP: 192.168.72.49
	I0814 17:37:36.993707   80228 certs.go:194] generating shared ca certs ...
	I0814 17:37:36.993728   80228 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:36.993924   80228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:36.993985   80228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:36.993998   80228 certs.go:256] generating profile certs ...
	I0814 17:37:36.994115   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key
	I0814 17:37:36.994206   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f
	I0814 17:37:36.994261   80228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key
	I0814 17:37:36.994428   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:36.994478   80228 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:36.994492   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:36.994522   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:36.994557   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:36.994603   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:36.994661   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:36.995558   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:37.043910   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:37.073810   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:37.097939   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:37.124449   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 17:37:37.154747   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:37.179474   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:37.204471   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:37.228579   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:37.266929   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:37.292912   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:37.316803   80228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:37.332934   80228 ssh_runner.go:195] Run: openssl version
	I0814 17:37:37.339316   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:37.349829   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354230   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354297   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.360089   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:37.371417   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:37.381777   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385894   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385955   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.391826   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:37.402049   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:37.412038   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416395   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416448   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.421794   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:37.431868   80228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:37.436305   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:37.442838   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:37.448991   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:37.454769   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:37.460381   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:37.466406   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
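
Two certificate idioms appear in the lines above and are easy to reuse: linking a CA into /etc/ssl/certs under its subject-hash name, and asking openssl whether a certificate expires within the next day. Both commands are lifted from the log; the file paths are the ones the log checks:

    # Trust the minikube CA: /etc/ssl/certs/<subject-hash>.0 must point at the PEM.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0

    # Fail (non-zero exit) if the cert expires within the next 86400 seconds (24h).
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
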
	I0814 17:37:37.472466   80228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:37.472584   80228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:37.472636   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.508256   80228 cri.go:89] found id: ""
	I0814 17:37:37.508323   80228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:37.518824   80228 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:37.518856   80228 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:37.518941   80228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:37.529328   80228 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:37.530242   80228 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:37.530890   80228 kubeconfig.go:62] /home/jenkins/minikube-integration/19446-13977/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-505584" cluster setting kubeconfig missing "old-k8s-version-505584" context setting]
	I0814 17:37:37.531922   80228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:37.539843   80228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:37.550012   80228 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0814 17:37:37.550051   80228 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:37.550063   80228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:37.550113   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.590226   80228 cri.go:89] found id: ""
	I0814 17:37:37.590307   80228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:37.606242   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:37.615340   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:37.615377   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:37.615436   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:37:37.623996   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:37.624063   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:37.633356   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:37:37.642888   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:37.642958   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:37.652532   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.661607   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:37.661679   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.670876   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:37:37.679780   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:37.679846   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
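
The four grep/rm pairs above implement one rule: any leftover kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A compact sketch of that rule, with the file list and endpoint taken from the log:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already targets the expected control-plane endpoint.
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
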
	I0814 17:37:37.690044   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:37.699617   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:37.813799   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.666487   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.901307   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.029983   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
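
With the configs staged, the restart runs kubeadm phase by phase rather than a full `kubeadm init`; the order above is certs, kubeconfig, kubelet-start, control-plane, etcd, all against the same staged config. A sketch of that sequence as the log runs it (the PATH prefix points at the version-pinned binaries directory shown earlier in the log):

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.20.0
    # Each entry is "phase [subphase]"; $phase is left unquoted so it expands to separate args.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done
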
	I0814 17:37:39.139056   80228 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:39.139133   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:39.639191   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.139315   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.639292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:41.139421   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:41.639312   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.139387   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.639981   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.139499   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.639391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.139425   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.639677   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.139466   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.639426   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:46.140065   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:46.640043   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.139213   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.639848   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.140080   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.639961   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.139473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.639212   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.139781   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.640028   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.140140   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.639969   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.139918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.639403   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.139921   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.640224   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.140272   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.639242   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.139908   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.639233   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:56.139955   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:56.639799   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.140184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.639918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.139310   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.639393   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.140139   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.639614   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.139472   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.139946   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.639422   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.139858   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.639412   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.140047   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.640170   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.139779   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.639728   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.139343   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.640249   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.139448   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.639416   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.140176   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.639682   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.140063   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.640014   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.139435   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.639256   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.139949   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.640283   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:11.139394   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:11.640107   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.140034   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.639463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.139428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.639575   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.140005   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.639473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.140124   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.639459   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:16.139187   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:16.639219   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.139463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.639839   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.140251   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.639890   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.139999   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.639652   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.139316   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.639809   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:21.139471   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:21.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.139292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.640151   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.139450   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.639996   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.139447   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.639267   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.139595   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.639451   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:26.140190   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:26.640120   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.140141   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.640184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.139896   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.140246   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.639895   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.139860   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.639358   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:31.140029   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:31.639317   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.140039   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.139240   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.640181   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.139789   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.639297   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.139871   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.639347   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:36.140044   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:36.640132   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.139254   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.639457   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.139928   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.639196   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:39.139906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:39.139980   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:39.179494   80228 cri.go:89] found id: ""
	I0814 17:38:39.179524   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.179535   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:39.179543   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:39.179619   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:39.210704   80228 cri.go:89] found id: ""
	I0814 17:38:39.210732   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.210741   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:39.210746   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:39.210796   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:39.247562   80228 cri.go:89] found id: ""
	I0814 17:38:39.247590   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.247597   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:39.247603   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:39.247665   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:39.281456   80228 cri.go:89] found id: ""
	I0814 17:38:39.281480   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.281488   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:39.281494   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:39.281553   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:39.318588   80228 cri.go:89] found id: ""
	I0814 17:38:39.318620   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.318630   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:39.318638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:39.318695   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:39.350270   80228 cri.go:89] found id: ""
	I0814 17:38:39.350294   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.350303   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:39.350311   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:39.350387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:39.382168   80228 cri.go:89] found id: ""
	I0814 17:38:39.382198   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.382209   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:39.382215   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:39.382325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:39.415307   80228 cri.go:89] found id: ""
	I0814 17:38:39.415342   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.415354   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:39.415375   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:39.415388   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:39.469591   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:39.469632   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:39.482909   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:39.482942   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:39.609874   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:39.609906   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:39.609921   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:39.683210   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:39.683253   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
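
From here the log settles into a wait loop: roughly every 500 ms it asks whether a kube-apiserver process exists, and after each window without one it collects the same diagnostics bundle (kubelet journal, dmesg, `kubectl describe nodes`, CRI-O journal, container list). A rough sketch of that loop using the commands visible above; the interval and the 60 s window are inferred from the timestamps, so treat them as approximate:

    wait_for_apiserver() {
      local deadline=$(( $(date +%s) + 60 ))
      while [ "$(date +%s)" -lt "$deadline" ]; do
        # Same liveness probe the log runs about twice a second.
        sudo pgrep -xnf 'kube-apiserver.*minikube.*' && return 0
        sleep 0.5
      done
      return 1
    }

    if ! wait_for_apiserver; then
      # Diagnostics the log gathers when the apiserver never shows up.
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u crio -n 400
      sudo crictl ps -a
    fi
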
	I0814 17:38:42.222687   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:42.235017   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:42.235088   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:42.285518   80228 cri.go:89] found id: ""
	I0814 17:38:42.285544   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.285553   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:42.285559   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:42.285614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:42.320462   80228 cri.go:89] found id: ""
	I0814 17:38:42.320492   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.320500   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:42.320506   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:42.320594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:42.353484   80228 cri.go:89] found id: ""
	I0814 17:38:42.353515   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.353528   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:42.353537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:42.353614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:42.388122   80228 cri.go:89] found id: ""
	I0814 17:38:42.388152   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.388163   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:42.388171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:42.388239   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:42.420246   80228 cri.go:89] found id: ""
	I0814 17:38:42.420275   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.420285   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:42.420293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:42.420359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:42.454636   80228 cri.go:89] found id: ""
	I0814 17:38:42.454669   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.454680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:42.454687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:42.454749   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:42.494638   80228 cri.go:89] found id: ""
	I0814 17:38:42.494670   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.494679   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:42.494686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:42.494751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:42.532224   80228 cri.go:89] found id: ""
	I0814 17:38:42.532257   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.532269   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:42.532281   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:42.532296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:42.546100   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:42.546132   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:42.616561   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:42.616589   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:42.616604   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:42.697269   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:42.697305   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:42.737787   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:42.737821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.293788   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:45.309020   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:45.309080   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:45.349218   80228 cri.go:89] found id: ""
	I0814 17:38:45.349246   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.349254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:45.349260   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:45.349318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:45.387622   80228 cri.go:89] found id: ""
	I0814 17:38:45.387653   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.387664   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:45.387672   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:45.387750   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:45.422120   80228 cri.go:89] found id: ""
	I0814 17:38:45.422154   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.422164   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:45.422169   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:45.422226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:45.457309   80228 cri.go:89] found id: ""
	I0814 17:38:45.457337   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.457352   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:45.457361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:45.457412   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:45.488969   80228 cri.go:89] found id: ""
	I0814 17:38:45.489000   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.489011   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:45.489019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:45.489081   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:45.522230   80228 cri.go:89] found id: ""
	I0814 17:38:45.522258   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.522273   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:45.522280   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:45.522345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:45.555394   80228 cri.go:89] found id: ""
	I0814 17:38:45.555425   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.555440   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:45.555448   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:45.555520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:45.587870   80228 cri.go:89] found id: ""
	I0814 17:38:45.587899   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.587910   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:45.587934   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:45.587951   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.638662   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:45.638709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:45.652217   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:45.652248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:45.733611   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:45.733635   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:45.733648   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:45.822733   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:45.822773   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:48.361519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:48.374848   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:48.374916   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:48.410849   80228 cri.go:89] found id: ""
	I0814 17:38:48.410897   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.410911   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:48.410920   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:48.410986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:48.448507   80228 cri.go:89] found id: ""
	I0814 17:38:48.448530   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.448537   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:48.448543   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:48.448594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:48.486257   80228 cri.go:89] found id: ""
	I0814 17:38:48.486298   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.486306   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:48.486312   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:48.486363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:48.520447   80228 cri.go:89] found id: ""
	I0814 17:38:48.520473   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.520482   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:48.520487   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:48.520544   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:48.552659   80228 cri.go:89] found id: ""
	I0814 17:38:48.552690   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.552698   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:48.552704   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:48.552768   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:48.585302   80228 cri.go:89] found id: ""
	I0814 17:38:48.585331   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.585341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:48.585348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:48.585415   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:48.617388   80228 cri.go:89] found id: ""
	I0814 17:38:48.617417   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.617428   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:48.617435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:48.617503   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:48.658987   80228 cri.go:89] found id: ""
	I0814 17:38:48.659012   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.659019   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:48.659027   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:48.659041   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:48.719882   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:48.719915   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:48.738962   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:48.738994   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:48.807703   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:48.807727   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:48.807739   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:48.886555   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:48.886585   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:51.423653   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:51.436700   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:51.436792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:51.473198   80228 cri.go:89] found id: ""
	I0814 17:38:51.473227   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.473256   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:51.473262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:51.473311   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:51.508631   80228 cri.go:89] found id: ""
	I0814 17:38:51.508664   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.508675   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:51.508682   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:51.508743   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:51.540917   80228 cri.go:89] found id: ""
	I0814 17:38:51.540950   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.540958   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:51.540965   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:51.541014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:51.578112   80228 cri.go:89] found id: ""
	I0814 17:38:51.578140   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.578150   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:51.578158   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:51.578220   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:51.612664   80228 cri.go:89] found id: ""
	I0814 17:38:51.612692   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.612700   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:51.612706   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:51.612756   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:51.646374   80228 cri.go:89] found id: ""
	I0814 17:38:51.646399   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.646407   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:51.646413   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:51.646463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:51.682052   80228 cri.go:89] found id: ""
	I0814 17:38:51.682081   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.682092   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:51.682098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:51.682147   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:51.722625   80228 cri.go:89] found id: ""
	I0814 17:38:51.722653   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.722663   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:51.722674   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:51.722687   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:51.771788   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:51.771818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:51.785403   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:51.785432   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:51.854249   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:51.854269   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:51.854281   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:51.938121   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:51.938155   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.475672   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:54.491309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:54.491399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:54.524971   80228 cri.go:89] found id: ""
	I0814 17:38:54.525001   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.525011   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:54.525023   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:54.525087   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:54.560631   80228 cri.go:89] found id: ""
	I0814 17:38:54.560661   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.560670   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:54.560675   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:54.560728   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:54.595710   80228 cri.go:89] found id: ""
	I0814 17:38:54.595740   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.595751   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:54.595759   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:54.595824   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:54.631449   80228 cri.go:89] found id: ""
	I0814 17:38:54.631476   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.631487   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:54.631495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:54.631557   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:54.666492   80228 cri.go:89] found id: ""
	I0814 17:38:54.666526   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.666539   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:54.666548   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:54.666617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:54.701111   80228 cri.go:89] found id: ""
	I0814 17:38:54.701146   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.701158   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:54.701166   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:54.701226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:54.737535   80228 cri.go:89] found id: ""
	I0814 17:38:54.737574   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.737585   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:54.737595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:54.737653   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:54.771658   80228 cri.go:89] found id: ""
	I0814 17:38:54.771679   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.771686   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:54.771694   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:54.771709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:54.841798   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:54.841817   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:54.841829   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:54.930861   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:54.930917   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.970508   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:54.970540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:55.023077   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:55.023123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:57.538876   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:57.551796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:57.551868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:57.584576   80228 cri.go:89] found id: ""
	I0814 17:38:57.584601   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.584609   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:57.584617   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:57.584687   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:57.617209   80228 cri.go:89] found id: ""
	I0814 17:38:57.617239   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.617249   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:57.617257   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:57.617338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:57.650062   80228 cri.go:89] found id: ""
	I0814 17:38:57.650089   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.650096   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:57.650102   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:57.650160   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:57.681118   80228 cri.go:89] found id: ""
	I0814 17:38:57.681146   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.681154   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:57.681160   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:57.681228   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:57.713803   80228 cri.go:89] found id: ""
	I0814 17:38:57.713834   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.713842   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:57.713851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:57.713920   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:57.749555   80228 cri.go:89] found id: ""
	I0814 17:38:57.749594   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.749604   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:57.749613   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:57.749677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:57.782714   80228 cri.go:89] found id: ""
	I0814 17:38:57.782744   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.782755   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:57.782763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:57.782826   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:57.815386   80228 cri.go:89] found id: ""
	I0814 17:38:57.815414   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.815423   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:57.815436   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:57.815450   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:57.868153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:57.868183   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:57.881022   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:57.881053   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:57.950474   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:57.950501   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:57.950515   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:58.032611   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:58.032644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:00.569493   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:00.583257   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:00.583384   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:00.614680   80228 cri.go:89] found id: ""
	I0814 17:39:00.614712   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.614723   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:00.614732   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:00.614792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:00.648161   80228 cri.go:89] found id: ""
	I0814 17:39:00.648189   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.648196   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:00.648203   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:00.648256   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:00.681844   80228 cri.go:89] found id: ""
	I0814 17:39:00.681872   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.681883   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:00.681890   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:00.681952   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:00.714773   80228 cri.go:89] found id: ""
	I0814 17:39:00.714804   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.714815   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:00.714823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:00.714891   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:00.747748   80228 cri.go:89] found id: ""
	I0814 17:39:00.747774   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.747781   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:00.747787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:00.747845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:00.783135   80228 cri.go:89] found id: ""
	I0814 17:39:00.783168   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.783179   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:00.783186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:00.783242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:00.817505   80228 cri.go:89] found id: ""
	I0814 17:39:00.817541   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.817552   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:00.817567   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:00.817633   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:00.849205   80228 cri.go:89] found id: ""
	I0814 17:39:00.849231   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.849241   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:00.849252   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:00.849273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:00.902529   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:00.902567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:00.916313   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:00.916346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:00.988708   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:00.988725   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:00.988737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:01.063818   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:01.063853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:03.603241   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:03.616400   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:03.616504   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:03.649580   80228 cri.go:89] found id: ""
	I0814 17:39:03.649619   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.649637   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:03.649650   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:03.649718   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:03.686252   80228 cri.go:89] found id: ""
	I0814 17:39:03.686274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.686282   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:03.686289   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:03.686349   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:03.720995   80228 cri.go:89] found id: ""
	I0814 17:39:03.721024   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.721036   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:03.721043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:03.721094   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:03.753466   80228 cri.go:89] found id: ""
	I0814 17:39:03.753491   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.753500   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:03.753506   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:03.753554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:03.794427   80228 cri.go:89] found id: ""
	I0814 17:39:03.794450   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.794458   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:03.794464   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:03.794524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:03.826245   80228 cri.go:89] found id: ""
	I0814 17:39:03.826274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.826282   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:03.826288   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:03.826355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:03.857208   80228 cri.go:89] found id: ""
	I0814 17:39:03.857238   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.857247   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:03.857253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:03.857325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:03.892840   80228 cri.go:89] found id: ""
	I0814 17:39:03.892864   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.892871   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:03.892879   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:03.892891   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:03.948554   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:03.948579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:03.962222   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:03.962249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:04.031833   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:04.031859   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:04.031875   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:04.109572   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:04.109636   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:06.646923   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:06.659699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:06.659757   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:06.691918   80228 cri.go:89] found id: ""
	I0814 17:39:06.691941   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.691951   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:06.691958   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:06.692016   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:06.722675   80228 cri.go:89] found id: ""
	I0814 17:39:06.722703   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.722713   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:06.722720   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:06.722782   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:06.757215   80228 cri.go:89] found id: ""
	I0814 17:39:06.757248   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.757259   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:06.757266   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:06.757333   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:06.791337   80228 cri.go:89] found id: ""
	I0814 17:39:06.791370   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.791378   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:06.791384   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:06.791440   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:06.825182   80228 cri.go:89] found id: ""
	I0814 17:39:06.825209   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.825220   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:06.825234   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:06.825288   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:06.857473   80228 cri.go:89] found id: ""
	I0814 17:39:06.857498   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.857507   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:06.857514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:06.857582   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:06.891293   80228 cri.go:89] found id: ""
	I0814 17:39:06.891343   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.891355   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:06.891363   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:06.891421   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:06.927476   80228 cri.go:89] found id: ""
	I0814 17:39:06.927505   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.927516   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:06.927527   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:06.927541   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:06.980604   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:06.980635   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:06.994648   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:06.994678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:07.072554   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:07.072580   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:07.072599   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:07.153141   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:07.153182   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:09.693348   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:09.705754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:09.705814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:09.739674   80228 cri.go:89] found id: ""
	I0814 17:39:09.739706   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.739717   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:09.739724   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:09.739788   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:09.774381   80228 cri.go:89] found id: ""
	I0814 17:39:09.774405   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.774413   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:09.774420   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:09.774478   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:09.806586   80228 cri.go:89] found id: ""
	I0814 17:39:09.806614   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.806623   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:09.806629   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:09.806696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:09.839564   80228 cri.go:89] found id: ""
	I0814 17:39:09.839594   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.839602   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:09.839614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:09.839672   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:09.872338   80228 cri.go:89] found id: ""
	I0814 17:39:09.872373   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.872385   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:09.872393   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:09.872457   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:09.904184   80228 cri.go:89] found id: ""
	I0814 17:39:09.904223   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.904231   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:09.904253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:09.904312   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:09.937217   80228 cri.go:89] found id: ""
	I0814 17:39:09.937242   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.937251   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:09.937259   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:09.937322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:09.972273   80228 cri.go:89] found id: ""
	I0814 17:39:09.972301   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.972313   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:09.972325   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:09.972341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:10.023736   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:10.023764   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:10.036654   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:10.036681   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:10.104031   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:10.104052   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:10.104068   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:10.187816   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:10.187853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:12.727237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:12.741970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:12.742041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:12.778721   80228 cri.go:89] found id: ""
	I0814 17:39:12.778748   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.778758   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:12.778765   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:12.778820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:12.812575   80228 cri.go:89] found id: ""
	I0814 17:39:12.812603   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.812610   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:12.812619   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:12.812678   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:12.845697   80228 cri.go:89] found id: ""
	I0814 17:39:12.845726   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.845737   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:12.845744   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:12.845809   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:12.879491   80228 cri.go:89] found id: ""
	I0814 17:39:12.879518   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.879529   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:12.879536   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:12.879604   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:12.912321   80228 cri.go:89] found id: ""
	I0814 17:39:12.912348   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.912356   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:12.912361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:12.912410   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:12.948866   80228 cri.go:89] found id: ""
	I0814 17:39:12.948889   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.948897   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:12.948903   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:12.948963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:12.983394   80228 cri.go:89] found id: ""
	I0814 17:39:12.983444   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.983459   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:12.983466   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:12.983530   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:13.018406   80228 cri.go:89] found id: ""
	I0814 17:39:13.018427   80228 logs.go:276] 0 containers: []
	W0814 17:39:13.018434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:13.018442   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:13.018457   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.069615   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:13.069655   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:13.082618   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:13.082651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:13.145033   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:13.145054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:13.145067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:13.225074   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:13.225108   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:15.765512   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:15.778320   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:15.778380   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:15.812847   80228 cri.go:89] found id: ""
	I0814 17:39:15.812876   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.812885   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:15.812896   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:15.812944   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:15.845131   80228 cri.go:89] found id: ""
	I0814 17:39:15.845159   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.845169   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:15.845176   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:15.845242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:15.879763   80228 cri.go:89] found id: ""
	I0814 17:39:15.879789   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.879799   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:15.879807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:15.879864   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:15.912746   80228 cri.go:89] found id: ""
	I0814 17:39:15.912776   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.912784   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:15.912797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:15.912858   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:15.946433   80228 cri.go:89] found id: ""
	I0814 17:39:15.946456   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.946465   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:15.946473   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:15.946534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:15.980060   80228 cri.go:89] found id: ""
	I0814 17:39:15.980086   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.980096   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:15.980103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:15.980167   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:16.011539   80228 cri.go:89] found id: ""
	I0814 17:39:16.011570   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.011581   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:16.011590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:16.011660   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:16.046019   80228 cri.go:89] found id: ""
	I0814 17:39:16.046046   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.046057   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:16.046068   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:16.046083   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:16.058442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:16.058470   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:16.132775   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:16.132799   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:16.132811   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:16.218360   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:16.218398   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:16.258070   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:16.258096   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:18.813127   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:18.826187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:18.826267   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:18.858405   80228 cri.go:89] found id: ""
	I0814 17:39:18.858433   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.858444   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:18.858452   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:18.858524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:18.893302   80228 cri.go:89] found id: ""
	I0814 17:39:18.893335   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.893342   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:18.893350   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:18.893417   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:18.929885   80228 cri.go:89] found id: ""
	I0814 17:39:18.929919   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.929929   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:18.929937   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:18.930000   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:18.966758   80228 cri.go:89] found id: ""
	I0814 17:39:18.966783   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.966792   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:18.966799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:18.966861   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:18.999815   80228 cri.go:89] found id: ""
	I0814 17:39:18.999838   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.999845   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:18.999851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:18.999915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:19.033737   80228 cri.go:89] found id: ""
	I0814 17:39:19.033761   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.033768   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:19.033774   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:19.033830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:19.070080   80228 cri.go:89] found id: ""
	I0814 17:39:19.070105   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.070113   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:19.070119   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:19.070190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:19.102868   80228 cri.go:89] found id: ""
	I0814 17:39:19.102897   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.102907   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:19.102918   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:19.102932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:19.156525   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:19.156569   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:19.170193   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:19.170225   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:19.236521   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:19.236547   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:19.236561   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:19.315984   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:19.316024   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:21.855554   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:21.868457   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:21.868527   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:21.902098   80228 cri.go:89] found id: ""
	I0814 17:39:21.902124   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.902132   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:21.902139   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:21.902200   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:21.934876   80228 cri.go:89] found id: ""
	I0814 17:39:21.934908   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.934919   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:21.934926   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:21.934987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:21.976507   80228 cri.go:89] found id: ""
	I0814 17:39:21.976536   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.976548   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:21.976555   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:21.976617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:22.013876   80228 cri.go:89] found id: ""
	I0814 17:39:22.013897   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.013904   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:22.013909   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:22.013955   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:22.051943   80228 cri.go:89] found id: ""
	I0814 17:39:22.051969   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.051979   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:22.051999   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:22.052064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:22.084702   80228 cri.go:89] found id: ""
	I0814 17:39:22.084725   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.084733   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:22.084738   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:22.084784   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:22.117397   80228 cri.go:89] found id: ""
	I0814 17:39:22.117424   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.117432   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:22.117439   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:22.117490   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:22.154139   80228 cri.go:89] found id: ""
	I0814 17:39:22.154168   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.154178   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:22.154189   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:22.154201   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:22.205550   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:22.205580   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:22.219644   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:22.219679   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:22.288934   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:22.288957   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:22.288969   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:22.372917   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:22.372954   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:24.912578   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:24.925365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:24.925430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:24.961207   80228 cri.go:89] found id: ""
	I0814 17:39:24.961234   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.961248   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:24.961257   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:24.961339   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:24.998878   80228 cri.go:89] found id: ""
	I0814 17:39:24.998904   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.998911   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:24.998918   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:24.998971   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:25.034141   80228 cri.go:89] found id: ""
	I0814 17:39:25.034174   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.034187   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:25.034196   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:25.034274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:25.075634   80228 cri.go:89] found id: ""
	I0814 17:39:25.075667   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.075679   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:25.075688   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:25.075759   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:25.112890   80228 cri.go:89] found id: ""
	I0814 17:39:25.112929   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.112939   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:25.112946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:25.113007   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:25.152887   80228 cri.go:89] found id: ""
	I0814 17:39:25.152913   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.152921   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:25.152927   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:25.152987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:25.186421   80228 cri.go:89] found id: ""
	I0814 17:39:25.186452   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.186463   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:25.186471   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:25.186537   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:25.220390   80228 cri.go:89] found id: ""
	I0814 17:39:25.220417   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.220425   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:25.220432   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:25.220446   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:25.296112   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:25.296146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:25.335421   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:25.335449   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:25.387690   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:25.387718   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:25.401926   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:25.401957   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:25.471111   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:27.972237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:27.985512   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:27.985575   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:28.019454   80228 cri.go:89] found id: ""
	I0814 17:39:28.019482   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.019493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:28.019502   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:28.019566   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:28.056908   80228 cri.go:89] found id: ""
	I0814 17:39:28.056931   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.056939   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:28.056944   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:28.056998   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:28.090678   80228 cri.go:89] found id: ""
	I0814 17:39:28.090707   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.090715   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:28.090721   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:28.090785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:28.125557   80228 cri.go:89] found id: ""
	I0814 17:39:28.125591   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.125609   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:28.125620   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:28.125682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:28.158092   80228 cri.go:89] found id: ""
	I0814 17:39:28.158121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.158129   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:28.158135   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:28.158191   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:28.193403   80228 cri.go:89] found id: ""
	I0814 17:39:28.193434   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.193445   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:28.193454   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:28.193524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:28.231095   80228 cri.go:89] found id: ""
	I0814 17:39:28.231121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.231131   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:28.231139   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:28.231203   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:28.280157   80228 cri.go:89] found id: ""
	I0814 17:39:28.280185   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.280196   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:28.280207   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:28.280220   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:28.352877   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:28.352894   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:28.352906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:28.439692   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:28.439736   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:28.479986   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:28.480012   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:28.538454   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:28.538493   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.052941   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:31.065810   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:31.065879   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:31.097988   80228 cri.go:89] found id: ""
	I0814 17:39:31.098013   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.098020   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:31.098045   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:31.098102   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:31.130126   80228 cri.go:89] found id: ""
	I0814 17:39:31.130152   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.130160   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:31.130166   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:31.130225   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:31.165945   80228 cri.go:89] found id: ""
	I0814 17:39:31.165984   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.165995   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:31.166003   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:31.166064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:31.199749   80228 cri.go:89] found id: ""
	I0814 17:39:31.199772   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.199778   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:31.199784   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:31.199843   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:31.231398   80228 cri.go:89] found id: ""
	I0814 17:39:31.231425   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.231436   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:31.231444   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:31.231528   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:31.263842   80228 cri.go:89] found id: ""
	I0814 17:39:31.263868   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.263878   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:31.263885   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:31.263950   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:31.299258   80228 cri.go:89] found id: ""
	I0814 17:39:31.299289   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.299301   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:31.299309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:31.299399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:31.332626   80228 cri.go:89] found id: ""
	I0814 17:39:31.332649   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.332657   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:31.332666   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:31.332678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:31.369262   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:31.369288   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:31.426003   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:31.426034   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.439583   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:31.439611   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:31.507538   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:31.507563   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:31.507583   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.085342   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:34.097491   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:34.097567   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:34.129220   80228 cri.go:89] found id: ""
	I0814 17:39:34.129244   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.129254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:34.129262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:34.129322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:34.161233   80228 cri.go:89] found id: ""
	I0814 17:39:34.161256   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.161264   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:34.161270   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:34.161334   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:34.193649   80228 cri.go:89] found id: ""
	I0814 17:39:34.193675   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.193683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:34.193689   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:34.193754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:34.226722   80228 cri.go:89] found id: ""
	I0814 17:39:34.226753   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.226763   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:34.226772   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:34.226842   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:34.259735   80228 cri.go:89] found id: ""
	I0814 17:39:34.259761   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.259774   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:34.259787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:34.259850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:34.296804   80228 cri.go:89] found id: ""
	I0814 17:39:34.296830   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.296838   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:34.296844   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:34.296894   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:34.328942   80228 cri.go:89] found id: ""
	I0814 17:39:34.328973   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.328982   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:34.328988   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:34.329041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:34.360820   80228 cri.go:89] found id: ""
	I0814 17:39:34.360847   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.360858   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:34.360868   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:34.360882   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:34.411106   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:34.411142   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:34.424737   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:34.424769   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:34.489094   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:34.489122   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:34.489138   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.569783   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:34.569818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.107492   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:37.120829   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:37.120901   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:37.154556   80228 cri.go:89] found id: ""
	I0814 17:39:37.154589   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.154601   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:37.154609   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:37.154673   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:37.192570   80228 cri.go:89] found id: ""
	I0814 17:39:37.192602   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.192609   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:37.192615   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:37.192679   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:37.225845   80228 cri.go:89] found id: ""
	I0814 17:39:37.225891   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.225902   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:37.225917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:37.225986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:37.262370   80228 cri.go:89] found id: ""
	I0814 17:39:37.262399   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.262408   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:37.262416   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:37.262481   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:37.297642   80228 cri.go:89] found id: ""
	I0814 17:39:37.297669   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.297680   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:37.297687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:37.297754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:37.331006   80228 cri.go:89] found id: ""
	I0814 17:39:37.331032   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.331041   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:37.331046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:37.331111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:37.364753   80228 cri.go:89] found id: ""
	I0814 17:39:37.364777   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.364786   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:37.364792   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:37.364850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:37.397722   80228 cri.go:89] found id: ""
	I0814 17:39:37.397748   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.397760   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:37.397770   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:37.397785   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:37.473616   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:37.473643   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:37.473659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:37.557672   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:37.557710   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.596337   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:37.596368   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:37.646815   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:37.646853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.160391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:40.174099   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:40.174181   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:40.208783   80228 cri.go:89] found id: ""
	I0814 17:39:40.208814   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.208821   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:40.208829   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:40.208880   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:40.243555   80228 cri.go:89] found id: ""
	I0814 17:39:40.243580   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.243588   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:40.243594   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:40.243661   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:40.276685   80228 cri.go:89] found id: ""
	I0814 17:39:40.276711   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.276723   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:40.276731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:40.276795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:40.309893   80228 cri.go:89] found id: ""
	I0814 17:39:40.309925   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.309937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:40.309944   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:40.310073   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:40.341724   80228 cri.go:89] found id: ""
	I0814 17:39:40.341751   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.341762   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:40.341770   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:40.341834   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:40.376442   80228 cri.go:89] found id: ""
	I0814 17:39:40.376478   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.376487   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:40.376495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:40.376558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:40.419240   80228 cri.go:89] found id: ""
	I0814 17:39:40.419269   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.419277   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:40.419284   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:40.419374   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:40.464678   80228 cri.go:89] found id: ""
	I0814 17:39:40.464703   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.464712   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:40.464721   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:40.464737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:40.531138   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:40.531175   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.546809   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:40.546842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:40.618791   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:40.618809   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:40.618821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:40.706169   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:40.706219   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:43.250987   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:43.266109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:43.266179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:43.301860   80228 cri.go:89] found id: ""
	I0814 17:39:43.301891   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.301899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:43.301908   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:43.301991   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:43.337166   80228 cri.go:89] found id: ""
	I0814 17:39:43.337195   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.337205   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:43.337212   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:43.337262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:43.370640   80228 cri.go:89] found id: ""
	I0814 17:39:43.370671   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.370683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:43.370696   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:43.370752   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:43.405598   80228 cri.go:89] found id: ""
	I0814 17:39:43.405624   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.405632   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:43.405638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:43.405705   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:43.437161   80228 cri.go:89] found id: ""
	I0814 17:39:43.437184   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.437192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:43.437198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:43.437295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:43.470675   80228 cri.go:89] found id: ""
	I0814 17:39:43.470707   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.470718   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:43.470726   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:43.470787   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:43.503036   80228 cri.go:89] found id: ""
	I0814 17:39:43.503062   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.503073   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:43.503081   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:43.503149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:43.538269   80228 cri.go:89] found id: ""
	I0814 17:39:43.538296   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.538304   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:43.538328   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:43.538340   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:43.621889   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:43.621936   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:43.667460   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:43.667491   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:43.723630   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:43.723663   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:43.738905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:43.738939   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:43.805484   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:46.306031   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:46.324624   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:46.324696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:46.360039   80228 cri.go:89] found id: ""
	I0814 17:39:46.360066   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.360074   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:46.360082   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:46.360131   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:46.413735   80228 cri.go:89] found id: ""
	I0814 17:39:46.413767   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.413779   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:46.413788   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:46.413876   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:46.458823   80228 cri.go:89] found id: ""
	I0814 17:39:46.458851   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.458861   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:46.458869   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:46.458928   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:46.495347   80228 cri.go:89] found id: ""
	I0814 17:39:46.495378   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.495387   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:46.495392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:46.495441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:46.531502   80228 cri.go:89] found id: ""
	I0814 17:39:46.531533   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.531545   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:46.531554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:46.531624   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:46.564450   80228 cri.go:89] found id: ""
	I0814 17:39:46.564473   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.564482   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:46.564488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:46.564535   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:46.598293   80228 cri.go:89] found id: ""
	I0814 17:39:46.598401   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.598421   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:46.598431   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:46.598498   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:46.632370   80228 cri.go:89] found id: ""
	I0814 17:39:46.632400   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.632411   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:46.632423   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:46.632438   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:46.711814   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:46.711848   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:46.749410   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:46.749443   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:46.801686   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:46.801720   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:46.815196   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:46.815218   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:46.885648   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.386223   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:49.399359   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:49.399430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:49.432133   80228 cri.go:89] found id: ""
	I0814 17:39:49.432168   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.432179   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:49.432186   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:49.432250   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:49.469760   80228 cri.go:89] found id: ""
	I0814 17:39:49.469790   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.469799   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:49.469811   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:49.469873   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:49.500437   80228 cri.go:89] found id: ""
	I0814 17:39:49.500466   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.500474   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:49.500481   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:49.500531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:49.533685   80228 cri.go:89] found id: ""
	I0814 17:39:49.533709   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.533717   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:49.533723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:49.533790   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:49.570551   80228 cri.go:89] found id: ""
	I0814 17:39:49.570577   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.570584   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:49.570590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:49.570654   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:49.606649   80228 cri.go:89] found id: ""
	I0814 17:39:49.606672   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.606680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:49.606686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:49.606734   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:49.638060   80228 cri.go:89] found id: ""
	I0814 17:39:49.638090   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.638101   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:49.638109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:49.638178   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:49.674503   80228 cri.go:89] found id: ""
	I0814 17:39:49.674526   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.674534   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:49.674543   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:49.674563   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:49.710185   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:49.710213   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:49.764112   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:49.764146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:49.777862   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:49.777888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:49.849786   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.849806   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:49.849819   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:52.429811   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:52.444364   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:52.444441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:52.483047   80228 cri.go:89] found id: ""
	I0814 17:39:52.483074   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.483085   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:52.483093   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:52.483157   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:52.520236   80228 cri.go:89] found id: ""
	I0814 17:39:52.520264   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.520274   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:52.520287   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:52.520353   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:52.553757   80228 cri.go:89] found id: ""
	I0814 17:39:52.553784   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.553795   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:52.553802   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:52.553869   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:52.588782   80228 cri.go:89] found id: ""
	I0814 17:39:52.588808   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.588818   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:52.588827   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:52.588893   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:52.620144   80228 cri.go:89] found id: ""
	I0814 17:39:52.620180   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.620192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:52.620201   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:52.620274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:52.652712   80228 cri.go:89] found id: ""
	I0814 17:39:52.652743   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.652755   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:52.652763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:52.652825   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:52.687789   80228 cri.go:89] found id: ""
	I0814 17:39:52.687819   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.687831   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:52.687838   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:52.687892   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:52.718996   80228 cri.go:89] found id: ""
	I0814 17:39:52.719021   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.719031   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:52.719041   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:52.719055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:52.775775   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:52.775808   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:52.789024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:52.789055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:52.863320   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:52.863351   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:52.863366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:52.941533   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:52.941571   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
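	(The cycle above repeats every few seconds while minikube waits for the apiserver to come back: it probes for a kube-apiserver process, lists CRI containers for each control-plane component, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A minimal shell sketch of that same probe sequence, assembled only from the commands shown in this log and assuming it is run on the node with crictl, journalctl and the bundled kubectl present:)

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"        # empty output => no container found for this component
	done
	sudo journalctl -u kubelet -n 400                  # kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig        # fails with "connection refused" while the apiserver is down
	sudo journalctl -u crio -n 400                     # CRI-O logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status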
	I0814 17:39:55.477833   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:55.490723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:55.490783   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:55.525816   80228 cri.go:89] found id: ""
	I0814 17:39:55.525844   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.525852   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:55.525859   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:55.525908   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:55.561855   80228 cri.go:89] found id: ""
	I0814 17:39:55.561878   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.561887   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:55.561892   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:55.561949   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:55.599997   80228 cri.go:89] found id: ""
	I0814 17:39:55.600027   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.600038   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:55.600046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:55.600112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:55.632869   80228 cri.go:89] found id: ""
	I0814 17:39:55.632902   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.632914   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:55.632922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:55.632990   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:55.666029   80228 cri.go:89] found id: ""
	I0814 17:39:55.666055   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.666066   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:55.666079   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:55.666136   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:55.697222   80228 cri.go:89] found id: ""
	I0814 17:39:55.697247   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.697254   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:55.697260   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:55.697308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:55.729517   80228 cri.go:89] found id: ""
	I0814 17:39:55.729549   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.729561   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:55.729576   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:55.729640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:55.763890   80228 cri.go:89] found id: ""
	I0814 17:39:55.763922   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.763934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:55.763944   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:55.763960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:55.819588   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:55.819624   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:55.833281   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:55.833314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:55.904610   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:55.904632   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:55.904644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:55.981035   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:55.981069   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:58.522870   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:58.536151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:58.536224   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:58.568827   80228 cri.go:89] found id: ""
	I0814 17:39:58.568857   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.568869   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:58.568877   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:58.568946   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:58.600523   80228 cri.go:89] found id: ""
	I0814 17:39:58.600554   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.600564   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:58.600571   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:58.600640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:58.634201   80228 cri.go:89] found id: ""
	I0814 17:39:58.634232   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.634240   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:58.634245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:58.634308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:58.668746   80228 cri.go:89] found id: ""
	I0814 17:39:58.668772   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.668781   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:58.668787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:58.668847   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:58.699695   80228 cri.go:89] found id: ""
	I0814 17:39:58.699727   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.699739   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:58.699752   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:58.699836   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:58.731047   80228 cri.go:89] found id: ""
	I0814 17:39:58.731081   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.731095   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:58.731103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:58.731168   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:58.773454   80228 cri.go:89] found id: ""
	I0814 17:39:58.773486   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.773495   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:58.773501   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:58.773561   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:58.810135   80228 cri.go:89] found id: ""
	I0814 17:39:58.810159   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.810166   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:58.810175   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:58.810191   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:58.844897   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:58.844925   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:58.901700   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:58.901745   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:58.914272   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:58.914296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:58.984593   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:58.984610   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:58.984622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:01.563227   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:01.576764   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:01.576840   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:01.610842   80228 cri.go:89] found id: ""
	I0814 17:40:01.610871   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.610878   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:01.610884   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:01.610935   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:01.643774   80228 cri.go:89] found id: ""
	I0814 17:40:01.643806   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.643816   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:01.643824   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:01.643888   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:01.677867   80228 cri.go:89] found id: ""
	I0814 17:40:01.677892   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.677899   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:01.677906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:01.677967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:01.712394   80228 cri.go:89] found id: ""
	I0814 17:40:01.712420   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.712427   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:01.712433   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:01.712492   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:01.745637   80228 cri.go:89] found id: ""
	I0814 17:40:01.745666   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.745676   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:01.745683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:01.745745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:01.782364   80228 cri.go:89] found id: ""
	I0814 17:40:01.782394   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.782404   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:01.782411   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:01.782484   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:01.814569   80228 cri.go:89] found id: ""
	I0814 17:40:01.814596   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.814605   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:01.814614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:01.814674   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:01.850421   80228 cri.go:89] found id: ""
	I0814 17:40:01.850450   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.850459   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:01.850468   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:01.850482   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:01.862965   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:01.863001   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:01.931312   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:01.931357   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:01.931375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:02.008236   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:02.008278   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:02.043238   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:02.043267   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:04.596909   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:04.610091   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:04.610158   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:04.645169   80228 cri.go:89] found id: ""
	I0814 17:40:04.645195   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.645205   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:04.645213   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:04.645279   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:04.677708   80228 cri.go:89] found id: ""
	I0814 17:40:04.677740   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.677750   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:04.677761   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:04.677823   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:04.710319   80228 cri.go:89] found id: ""
	I0814 17:40:04.710351   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.710362   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:04.710374   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:04.710443   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:04.745166   80228 cri.go:89] found id: ""
	I0814 17:40:04.745202   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.745219   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:04.745226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:04.745287   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:04.777307   80228 cri.go:89] found id: ""
	I0814 17:40:04.777354   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.777376   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:04.777383   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:04.777447   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:04.813854   80228 cri.go:89] found id: ""
	I0814 17:40:04.813886   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.813901   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:04.813908   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:04.813972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:04.848014   80228 cri.go:89] found id: ""
	I0814 17:40:04.848041   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.848049   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:04.848055   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:04.848113   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:04.882689   80228 cri.go:89] found id: ""
	I0814 17:40:04.882719   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.882731   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:04.882742   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:04.882760   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:04.952074   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:04.952096   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:04.952112   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:05.030258   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:05.030300   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:05.066509   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:05.066542   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:05.120153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:05.120195   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:07.634404   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:07.646900   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:07.646966   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:07.678654   80228 cri.go:89] found id: ""
	I0814 17:40:07.678680   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.678689   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:07.678696   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:07.678753   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:07.711355   80228 cri.go:89] found id: ""
	I0814 17:40:07.711381   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.711389   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:07.711395   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:07.711446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:07.744134   80228 cri.go:89] found id: ""
	I0814 17:40:07.744161   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.744169   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:07.744179   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:07.744242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:07.776981   80228 cri.go:89] found id: ""
	I0814 17:40:07.777008   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.777015   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:07.777022   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:07.777086   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:07.811626   80228 cri.go:89] found id: ""
	I0814 17:40:07.811651   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.811661   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:07.811667   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:07.811720   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:07.843218   80228 cri.go:89] found id: ""
	I0814 17:40:07.843251   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.843262   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:07.843270   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:07.843355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:07.875208   80228 cri.go:89] found id: ""
	I0814 17:40:07.875232   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.875239   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:07.875245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:07.875295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:07.907896   80228 cri.go:89] found id: ""
	I0814 17:40:07.907923   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.907934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:07.907945   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:07.907960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:07.959717   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:07.959753   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:07.973050   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:07.973081   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:08.035085   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:08.035107   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:08.035120   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:08.109722   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:08.109770   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:10.648203   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:10.661194   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:10.661280   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:10.698401   80228 cri.go:89] found id: ""
	I0814 17:40:10.698431   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.698442   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:10.698450   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:10.698515   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:10.730057   80228 cri.go:89] found id: ""
	I0814 17:40:10.730083   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.730094   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:10.730101   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:10.730163   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:10.768780   80228 cri.go:89] found id: ""
	I0814 17:40:10.768807   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.768817   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:10.768824   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:10.768885   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:10.800866   80228 cri.go:89] found id: ""
	I0814 17:40:10.800898   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.800907   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:10.800917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:10.800984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:10.833741   80228 cri.go:89] found id: ""
	I0814 17:40:10.833771   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.833782   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:10.833789   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:10.833850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:10.865670   80228 cri.go:89] found id: ""
	I0814 17:40:10.865699   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.865706   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:10.865717   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:10.865770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:10.904726   80228 cri.go:89] found id: ""
	I0814 17:40:10.904757   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.904765   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:10.904771   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:10.904821   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:10.940549   80228 cri.go:89] found id: ""
	I0814 17:40:10.940578   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.940588   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:10.940598   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:10.940620   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:10.992592   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:10.992622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:11.006388   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:11.006412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:11.075455   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:11.075473   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:11.075486   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:11.156622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:11.156658   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:13.695055   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:13.709460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:13.709531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:13.741941   80228 cri.go:89] found id: ""
	I0814 17:40:13.741967   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.741975   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:13.741981   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:13.742042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:13.773916   80228 cri.go:89] found id: ""
	I0814 17:40:13.773940   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.773947   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:13.773952   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:13.773999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:13.807871   80228 cri.go:89] found id: ""
	I0814 17:40:13.807902   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.807912   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:13.807918   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:13.807981   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:13.840902   80228 cri.go:89] found id: ""
	I0814 17:40:13.840931   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.840943   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:13.840952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:13.841018   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:13.871969   80228 cri.go:89] found id: ""
	I0814 17:40:13.871998   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.872010   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:13.872019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:13.872090   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:13.905502   80228 cri.go:89] found id: ""
	I0814 17:40:13.905524   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.905531   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:13.905537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:13.905599   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:13.937356   80228 cri.go:89] found id: ""
	I0814 17:40:13.937386   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.937396   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:13.937404   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:13.937466   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:13.972383   80228 cri.go:89] found id: ""
	I0814 17:40:13.972410   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.972418   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:13.972427   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:13.972448   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:14.022691   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:14.022717   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:14.035543   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:14.035567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:14.104869   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:14.104889   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:14.104905   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:14.182185   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:14.182221   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:16.720519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:16.734323   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:16.734406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:16.769454   80228 cri.go:89] found id: ""
	I0814 17:40:16.769483   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.769493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:16.769501   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:16.769565   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:16.801513   80228 cri.go:89] found id: ""
	I0814 17:40:16.801541   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.801548   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:16.801554   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:16.801610   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:16.835184   80228 cri.go:89] found id: ""
	I0814 17:40:16.835212   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.835220   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:16.835226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:16.835275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:16.867162   80228 cri.go:89] found id: ""
	I0814 17:40:16.867192   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.867201   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:16.867207   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:16.867257   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:16.902912   80228 cri.go:89] found id: ""
	I0814 17:40:16.902942   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.902953   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:16.902961   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:16.903026   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:16.935004   80228 cri.go:89] found id: ""
	I0814 17:40:16.935033   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.935044   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:16.935052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:16.935115   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:16.969082   80228 cri.go:89] found id: ""
	I0814 17:40:16.969110   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.969120   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:16.969127   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:16.969194   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:17.002594   80228 cri.go:89] found id: ""
	I0814 17:40:17.002622   80228 logs.go:276] 0 containers: []
	W0814 17:40:17.002633   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:17.002644   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:17.002659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:17.054319   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:17.054359   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:17.068024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:17.068048   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:17.139480   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:17.139499   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:17.139514   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:17.222086   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:17.222140   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:19.758630   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:19.772186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:19.772254   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:19.807719   80228 cri.go:89] found id: ""
	I0814 17:40:19.807751   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.807760   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:19.807766   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:19.807830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:19.851023   80228 cri.go:89] found id: ""
	I0814 17:40:19.851054   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.851067   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:19.851083   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:19.851154   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:19.882961   80228 cri.go:89] found id: ""
	I0814 17:40:19.882987   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.882997   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:19.883005   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:19.883063   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:19.920312   80228 cri.go:89] found id: ""
	I0814 17:40:19.920345   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.920356   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:19.920365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:19.920430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:19.953628   80228 cri.go:89] found id: ""
	I0814 17:40:19.953658   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.953671   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:19.953683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:19.953741   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:19.984998   80228 cri.go:89] found id: ""
	I0814 17:40:19.985028   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.985036   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:19.985043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:19.985092   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:20.018728   80228 cri.go:89] found id: ""
	I0814 17:40:20.018753   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.018761   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:20.018766   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:20.018814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:20.050718   80228 cri.go:89] found id: ""
	I0814 17:40:20.050743   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.050757   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:20.050765   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:20.050777   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:20.101567   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:20.101602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:20.114890   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:20.114920   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:20.183926   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:20.183948   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:20.183960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:20.270195   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:20.270223   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:22.807078   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:22.820187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:22.820260   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:22.852474   80228 cri.go:89] found id: ""
	I0814 17:40:22.852504   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.852514   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:22.852522   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:22.852596   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:22.887141   80228 cri.go:89] found id: ""
	I0814 17:40:22.887167   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.887177   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:22.887184   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:22.887248   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:22.919384   80228 cri.go:89] found id: ""
	I0814 17:40:22.919417   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.919428   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:22.919436   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:22.919502   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:22.951877   80228 cri.go:89] found id: ""
	I0814 17:40:22.951897   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.951905   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:22.951910   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:22.951965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:22.987712   80228 cri.go:89] found id: ""
	I0814 17:40:22.987742   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.987752   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:22.987760   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:22.987832   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:23.025562   80228 cri.go:89] found id: ""
	I0814 17:40:23.025597   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.025608   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:23.025616   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:23.025680   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:23.058928   80228 cri.go:89] found id: ""
	I0814 17:40:23.058955   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.058962   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:23.058969   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:23.059025   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:23.096807   80228 cri.go:89] found id: ""
	I0814 17:40:23.096836   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.096847   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:23.096858   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:23.096874   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:23.148943   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:23.148977   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:23.161905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:23.161927   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:23.232119   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:23.232147   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:23.232160   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:23.320693   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:23.320731   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:25.858506   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:25.871891   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:25.871964   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:25.904732   80228 cri.go:89] found id: ""
	I0814 17:40:25.904760   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.904769   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:25.904775   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:25.904830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:25.936317   80228 cri.go:89] found id: ""
	I0814 17:40:25.936347   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.936358   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:25.936365   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:25.936427   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:25.969921   80228 cri.go:89] found id: ""
	I0814 17:40:25.969946   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.969954   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:25.969960   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:25.970009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:26.022832   80228 cri.go:89] found id: ""
	I0814 17:40:26.022862   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.022872   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:26.022880   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:26.022941   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:26.056178   80228 cri.go:89] found id: ""
	I0814 17:40:26.056206   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.056214   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:26.056224   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:26.056275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:26.086921   80228 cri.go:89] found id: ""
	I0814 17:40:26.086955   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.086966   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:26.086974   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:26.087031   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:26.120631   80228 cri.go:89] found id: ""
	I0814 17:40:26.120665   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.120677   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:26.120686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:26.120745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:26.154258   80228 cri.go:89] found id: ""
	I0814 17:40:26.154289   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.154300   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:26.154310   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:26.154324   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:26.208366   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:26.208405   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:26.222160   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:26.222192   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:26.294737   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:26.294756   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:26.294768   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:26.372870   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:26.372906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
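	(The cycle above is minikube's log collector probing the node for control-plane containers: every crictl query returns no IDs, so only kubelet, dmesg, CRI-O, and container-status logs can be gathered, and "kubectl describe nodes" fails because nothing is serving on localhost:8443. As a minimal sketch, the same checks can be replayed by hand on the node; the commands below are taken verbatim from the log lines above and are not additional tooling.)

	    # Probe for a control-plane container by name (empty output means none exists)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Fall back to node-level logs when no containers are found
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    # Fails with "connection refused" while the apiserver is down
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
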
	I0814 17:40:28.908165   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:28.920754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:28.920816   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:28.953950   80228 cri.go:89] found id: ""
	I0814 17:40:28.953971   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.953978   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:28.953987   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:28.954035   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:28.985228   80228 cri.go:89] found id: ""
	I0814 17:40:28.985266   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.985278   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:28.985286   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:28.985347   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:29.016295   80228 cri.go:89] found id: ""
	I0814 17:40:29.016328   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.016336   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:29.016341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:29.016392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:29.048664   80228 cri.go:89] found id: ""
	I0814 17:40:29.048696   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.048707   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:29.048715   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:29.048778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:29.080441   80228 cri.go:89] found id: ""
	I0814 17:40:29.080466   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.080474   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:29.080520   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:29.080584   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:29.112377   80228 cri.go:89] found id: ""
	I0814 17:40:29.112407   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.112418   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:29.112426   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:29.112493   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:29.145368   80228 cri.go:89] found id: ""
	I0814 17:40:29.145395   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.145403   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:29.145409   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:29.145471   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:29.177305   80228 cri.go:89] found id: ""
	I0814 17:40:29.177333   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.177341   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:29.177350   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:29.177366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:29.232156   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:29.232197   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:29.245286   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:29.245317   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:29.322257   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:29.322286   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:29.322302   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:29.397679   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:29.397714   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:31.935264   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:31.948380   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:31.948446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:31.978898   80228 cri.go:89] found id: ""
	I0814 17:40:31.978925   80228 logs.go:276] 0 containers: []
	W0814 17:40:31.978932   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:31.978939   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:31.978989   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:32.010652   80228 cri.go:89] found id: ""
	I0814 17:40:32.010681   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.010692   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:32.010699   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:32.010767   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:32.044821   80228 cri.go:89] found id: ""
	I0814 17:40:32.044852   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.044860   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:32.044866   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:32.044915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:32.076359   80228 cri.go:89] found id: ""
	I0814 17:40:32.076388   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.076398   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:32.076406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:32.076469   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:32.107652   80228 cri.go:89] found id: ""
	I0814 17:40:32.107680   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.107692   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:32.107709   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:32.107770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:32.138445   80228 cri.go:89] found id: ""
	I0814 17:40:32.138473   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.138484   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:32.138492   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:32.138558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:32.173771   80228 cri.go:89] found id: ""
	I0814 17:40:32.173794   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.173802   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:32.173807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:32.173857   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:32.206387   80228 cri.go:89] found id: ""
	I0814 17:40:32.206418   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.206429   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:32.206441   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:32.206454   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.258114   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:32.258148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:32.271984   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:32.272009   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:32.335423   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:32.335447   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:32.335464   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:32.411155   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:32.411206   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:34.975280   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:34.988098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:34.988176   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:35.022020   80228 cri.go:89] found id: ""
	I0814 17:40:35.022047   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.022062   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:35.022071   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:35.022124   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:35.055528   80228 cri.go:89] found id: ""
	I0814 17:40:35.055568   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.055578   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:35.055586   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:35.055647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:35.088373   80228 cri.go:89] found id: ""
	I0814 17:40:35.088404   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.088415   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:35.088422   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:35.088489   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:35.123162   80228 cri.go:89] found id: ""
	I0814 17:40:35.123188   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.123198   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:35.123206   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:35.123268   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:35.160240   80228 cri.go:89] found id: ""
	I0814 17:40:35.160267   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.160277   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:35.160286   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:35.160348   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:35.196249   80228 cri.go:89] found id: ""
	I0814 17:40:35.196276   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.196285   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:35.196293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:35.196359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:35.232564   80228 cri.go:89] found id: ""
	I0814 17:40:35.232588   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.232598   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:35.232606   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:35.232671   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:35.267357   80228 cri.go:89] found id: ""
	I0814 17:40:35.267383   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.267392   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:35.267399   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:35.267412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:35.279779   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:35.279806   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:35.347748   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:35.347769   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:35.347782   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:35.427900   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:35.427932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:35.468925   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:35.468953   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:38.020581   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:38.034985   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:38.035066   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:38.070206   80228 cri.go:89] found id: ""
	I0814 17:40:38.070231   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.070240   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:38.070246   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:38.070294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:38.103859   80228 cri.go:89] found id: ""
	I0814 17:40:38.103885   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.103893   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:38.103898   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:38.103947   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:38.138247   80228 cri.go:89] found id: ""
	I0814 17:40:38.138271   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.138278   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:38.138285   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:38.138345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:38.179475   80228 cri.go:89] found id: ""
	I0814 17:40:38.179511   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.179520   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:38.179526   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:38.179578   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:38.224892   80228 cri.go:89] found id: ""
	I0814 17:40:38.224922   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.224932   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:38.224940   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:38.224996   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:38.270456   80228 cri.go:89] found id: ""
	I0814 17:40:38.270485   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.270497   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:38.270504   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:38.270569   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:38.305267   80228 cri.go:89] found id: ""
	I0814 17:40:38.305300   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.305308   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:38.305315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:38.305387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:38.336942   80228 cri.go:89] found id: ""
	I0814 17:40:38.336978   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.336989   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:38.337000   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:38.337016   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:38.388618   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:38.388651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:38.403442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:38.403472   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:38.478225   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:38.478256   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:38.478273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:38.553400   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:38.553440   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:41.089947   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:41.101989   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:41.102070   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:41.133743   80228 cri.go:89] found id: ""
	I0814 17:40:41.133767   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.133774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:41.133780   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:41.133828   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:41.169671   80228 cri.go:89] found id: ""
	I0814 17:40:41.169706   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.169714   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:41.169721   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:41.169773   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:41.203425   80228 cri.go:89] found id: ""
	I0814 17:40:41.203451   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.203459   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:41.203475   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:41.203534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:41.237031   80228 cri.go:89] found id: ""
	I0814 17:40:41.237064   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.237075   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:41.237084   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:41.237149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:41.271095   80228 cri.go:89] found id: ""
	I0814 17:40:41.271120   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.271128   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:41.271134   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:41.271190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:41.303640   80228 cri.go:89] found id: ""
	I0814 17:40:41.303672   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.303684   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:41.303692   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:41.303755   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:41.336010   80228 cri.go:89] found id: ""
	I0814 17:40:41.336047   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.336062   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:41.336071   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:41.336140   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:41.370098   80228 cri.go:89] found id: ""
	I0814 17:40:41.370133   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.370143   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:41.370154   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:41.370168   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:41.420760   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:41.420794   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:41.433651   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:41.433678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:41.506623   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:41.506644   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:41.506657   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:41.591390   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:41.591426   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:44.130649   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:44.144362   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:44.144428   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:44.178485   80228 cri.go:89] found id: ""
	I0814 17:40:44.178516   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.178527   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:44.178535   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:44.178600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:44.214231   80228 cri.go:89] found id: ""
	I0814 17:40:44.214260   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.214268   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:44.214274   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:44.214336   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:44.248483   80228 cri.go:89] found id: ""
	I0814 17:40:44.248513   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.248524   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:44.248531   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:44.248600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:44.282445   80228 cri.go:89] found id: ""
	I0814 17:40:44.282472   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.282481   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:44.282493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:44.282560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:44.315141   80228 cri.go:89] found id: ""
	I0814 17:40:44.315169   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.315190   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:44.315198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:44.315259   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:44.346756   80228 cri.go:89] found id: ""
	I0814 17:40:44.346781   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.346789   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:44.346795   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:44.346853   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:44.378143   80228 cri.go:89] found id: ""
	I0814 17:40:44.378172   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.378183   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:44.378191   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:44.378255   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:44.411526   80228 cri.go:89] found id: ""
	I0814 17:40:44.411557   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.411567   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:44.411578   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:44.411592   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:44.459873   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:44.459913   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:44.473112   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:44.473148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:44.547514   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:44.547546   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:44.547579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:44.630377   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:44.630415   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:47.173094   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:47.185854   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:47.185927   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:47.228755   80228 cri.go:89] found id: ""
	I0814 17:40:47.228781   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.228788   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:47.228795   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:47.228851   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:47.264986   80228 cri.go:89] found id: ""
	I0814 17:40:47.265020   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.265031   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:47.265037   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:47.265100   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:47.296900   80228 cri.go:89] found id: ""
	I0814 17:40:47.296929   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.296940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:47.296947   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:47.297009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:47.328120   80228 cri.go:89] found id: ""
	I0814 17:40:47.328147   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.328155   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:47.328161   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:47.328210   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:47.364147   80228 cri.go:89] found id: ""
	I0814 17:40:47.364171   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.364178   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:47.364184   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:47.364238   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:47.400466   80228 cri.go:89] found id: ""
	I0814 17:40:47.400493   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.400501   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:47.400507   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:47.400562   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:47.432681   80228 cri.go:89] found id: ""
	I0814 17:40:47.432713   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.432724   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:47.432732   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:47.432801   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:47.465466   80228 cri.go:89] found id: ""
	I0814 17:40:47.465498   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.465510   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:47.465522   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:47.465536   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:47.502076   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:47.502114   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:47.554451   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:47.554488   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:47.567658   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:47.567690   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:47.635805   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:47.635829   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:47.635844   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:50.215353   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:50.227723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:50.227795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:50.258250   80228 cri.go:89] found id: ""
	I0814 17:40:50.258276   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.258287   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:50.258296   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:50.258363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:50.291371   80228 cri.go:89] found id: ""
	I0814 17:40:50.291406   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.291416   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:50.291423   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:50.291479   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:50.321449   80228 cri.go:89] found id: ""
	I0814 17:40:50.321473   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.321481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:50.321486   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:50.321545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:50.351752   80228 cri.go:89] found id: ""
	I0814 17:40:50.351780   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.351791   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:50.351799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:50.351856   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:50.382022   80228 cri.go:89] found id: ""
	I0814 17:40:50.382050   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.382057   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:50.382063   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:50.382118   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:50.414057   80228 cri.go:89] found id: ""
	I0814 17:40:50.414083   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.414091   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:50.414098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:50.414156   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:50.447508   80228 cri.go:89] found id: ""
	I0814 17:40:50.447530   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.447537   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:50.447543   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:50.447606   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:50.487401   80228 cri.go:89] found id: ""
	I0814 17:40:50.487425   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.487434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:50.487442   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:50.487455   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:50.524404   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:50.524439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:50.578220   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:50.578256   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:50.591405   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:50.591431   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:50.657727   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:50.657750   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:50.657762   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:53.237985   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:53.250502   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:53.250572   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:53.285728   80228 cri.go:89] found id: ""
	I0814 17:40:53.285763   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.285774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:53.285784   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:53.285848   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:53.318195   80228 cri.go:89] found id: ""
	I0814 17:40:53.318231   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.318243   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:53.318252   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:53.318317   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:53.350259   80228 cri.go:89] found id: ""
	I0814 17:40:53.350291   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.350302   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:53.350310   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:53.350385   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:53.385894   80228 cri.go:89] found id: ""
	I0814 17:40:53.385920   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.385928   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:53.385934   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:53.385983   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:53.420851   80228 cri.go:89] found id: ""
	I0814 17:40:53.420878   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.420890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:53.420897   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:53.420963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:53.458332   80228 cri.go:89] found id: ""
	I0814 17:40:53.458370   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.458381   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:53.458392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:53.458465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:53.489719   80228 cri.go:89] found id: ""
	I0814 17:40:53.489750   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.489759   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:53.489765   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:53.489820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:53.522942   80228 cri.go:89] found id: ""
	I0814 17:40:53.522977   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.522988   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:53.522998   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:53.523013   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:53.599450   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:53.599492   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:53.637225   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:53.637254   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:53.688605   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:53.688647   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:53.704601   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:53.704633   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:53.775046   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
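	(Each iteration ends with the same failure: the apiserver endpoint at localhost:8443 refuses connections, which is consistent with crictl finding no kube-apiserver container at all. A quick manual confirmation on the node could look like the sketch below; the curl probe is an assumption added for illustration, is not part of the recorded log, and only works if curl is present in the guest.)

	    # No apiserver container is running (output is empty throughout the log above)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Hypothetical liveness probe against the port kubectl is trying to reach
	    curl -ks https://localhost:8443/healthz || echo "apiserver not reachable"
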
	I0814 17:40:56.275201   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:56.288406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:56.288463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:56.322862   80228 cri.go:89] found id: ""
	I0814 17:40:56.322891   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.322899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:56.322905   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:56.322954   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:56.356214   80228 cri.go:89] found id: ""
	I0814 17:40:56.356243   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.356262   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:56.356268   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:56.356338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:56.388877   80228 cri.go:89] found id: ""
	I0814 17:40:56.388900   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.388909   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:56.388915   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:56.388967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:56.422552   80228 cri.go:89] found id: ""
	I0814 17:40:56.422577   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.422585   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:56.422590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:56.422649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:56.456995   80228 cri.go:89] found id: ""
	I0814 17:40:56.457018   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.457026   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:56.457031   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:56.457079   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:56.495745   80228 cri.go:89] found id: ""
	I0814 17:40:56.495772   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.495788   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:56.495797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:56.495868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:56.529139   80228 cri.go:89] found id: ""
	I0814 17:40:56.529171   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.529179   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:56.529185   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:56.529237   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:56.561377   80228 cri.go:89] found id: ""
	I0814 17:40:56.561406   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.561414   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:56.561424   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:56.561439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:56.601504   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:56.601537   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:56.653369   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:56.653403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:56.666117   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:56.666144   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:56.731921   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.731949   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:56.731963   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.315712   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:59.328425   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:59.328486   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:59.364056   80228 cri.go:89] found id: ""
	I0814 17:40:59.364080   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.364088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:59.364094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:59.364151   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:59.398948   80228 cri.go:89] found id: ""
	I0814 17:40:59.398971   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.398978   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:59.398984   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:59.399029   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:59.430301   80228 cri.go:89] found id: ""
	I0814 17:40:59.430327   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.430335   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:59.430341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:59.430406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:59.465278   80228 cri.go:89] found id: ""
	I0814 17:40:59.465301   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.465309   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:59.465315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:59.465372   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:59.497544   80228 cri.go:89] found id: ""
	I0814 17:40:59.497575   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.497586   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:59.497595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:59.497659   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:59.529463   80228 cri.go:89] found id: ""
	I0814 17:40:59.529494   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.529506   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:59.529513   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:59.529587   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:59.562448   80228 cri.go:89] found id: ""
	I0814 17:40:59.562477   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.562487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:59.562495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:59.562609   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:59.594059   80228 cri.go:89] found id: ""
	I0814 17:40:59.594089   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.594103   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:59.594112   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:59.594123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.672139   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:59.672172   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:59.710714   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:59.710743   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:59.762645   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:59.762676   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:59.776006   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:59.776033   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:59.838187   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:02.338964   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:02.351381   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:02.351460   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:02.383206   80228 cri.go:89] found id: ""
	I0814 17:41:02.383235   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.383244   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:02.383250   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:02.383310   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:02.417016   80228 cri.go:89] found id: ""
	I0814 17:41:02.417042   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.417049   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:02.417055   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:02.417111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:02.451936   80228 cri.go:89] found id: ""
	I0814 17:41:02.451964   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.451974   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:02.451982   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:02.452042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:02.489896   80228 cri.go:89] found id: ""
	I0814 17:41:02.489927   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.489937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:02.489945   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:02.490011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:02.524273   80228 cri.go:89] found id: ""
	I0814 17:41:02.524308   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.524339   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:02.524346   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:02.524409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:02.558813   80228 cri.go:89] found id: ""
	I0814 17:41:02.558842   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.558850   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:02.558861   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:02.558917   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:02.592704   80228 cri.go:89] found id: ""
	I0814 17:41:02.592733   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.592747   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:02.592753   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:02.592818   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:02.625250   80228 cri.go:89] found id: ""
	I0814 17:41:02.625277   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.625288   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:02.625299   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:02.625312   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:02.677577   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:02.677613   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:02.691407   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:02.691439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:02.756797   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:02.756869   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:02.756888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:02.830803   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:02.830842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.370085   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:05.385272   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:05.385342   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:05.421775   80228 cri.go:89] found id: ""
	I0814 17:41:05.421799   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.421806   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:05.421812   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:05.421860   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:05.457054   80228 cri.go:89] found id: ""
	I0814 17:41:05.457083   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.457093   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:05.457100   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:05.457153   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:05.489290   80228 cri.go:89] found id: ""
	I0814 17:41:05.489330   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.489338   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:05.489345   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:05.489392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:05.527066   80228 cri.go:89] found id: ""
	I0814 17:41:05.527091   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.527098   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:05.527105   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:05.527155   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:05.563882   80228 cri.go:89] found id: ""
	I0814 17:41:05.563915   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.563925   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:05.563931   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:05.563982   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:05.601837   80228 cri.go:89] found id: ""
	I0814 17:41:05.601863   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.601871   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:05.601879   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:05.601940   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:05.633503   80228 cri.go:89] found id: ""
	I0814 17:41:05.633531   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.633539   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:05.633545   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:05.633615   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:05.668281   80228 cri.go:89] found id: ""
	I0814 17:41:05.668312   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.668324   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:05.668335   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:05.668349   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:05.747214   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:05.747249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.784408   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:05.784441   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:05.835067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:05.835103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:05.847938   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:05.847966   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:05.917404   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:08.417559   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:08.431092   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:08.431165   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:08.465357   80228 cri.go:89] found id: ""
	I0814 17:41:08.465515   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.465543   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:08.465560   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:08.465675   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:08.499085   80228 cri.go:89] found id: ""
	I0814 17:41:08.499114   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.499123   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:08.499129   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:08.499180   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:08.533881   80228 cri.go:89] found id: ""
	I0814 17:41:08.533909   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.533917   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:08.533922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:08.533972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:08.570503   80228 cri.go:89] found id: ""
	I0814 17:41:08.570549   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.570560   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:08.570572   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:08.570649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:08.602557   80228 cri.go:89] found id: ""
	I0814 17:41:08.602599   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.602610   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:08.602691   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:08.602785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:08.636174   80228 cri.go:89] found id: ""
	I0814 17:41:08.636199   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.636206   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:08.636213   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:08.636261   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:08.672774   80228 cri.go:89] found id: ""
	I0814 17:41:08.672804   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.672815   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:08.672823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:08.672890   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:08.705535   80228 cri.go:89] found id: ""
	I0814 17:41:08.705590   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.705605   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:08.705622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:08.705641   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:08.744315   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:08.744341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:08.794632   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:08.794666   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:08.808089   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:08.808117   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:08.876417   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:08.876436   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:08.876452   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:11.458562   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:11.470905   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:11.470965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:11.505992   80228 cri.go:89] found id: ""
	I0814 17:41:11.506023   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.506036   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:11.506044   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:11.506112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:11.540893   80228 cri.go:89] found id: ""
	I0814 17:41:11.540922   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.540932   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:11.540945   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:11.541001   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:11.575423   80228 cri.go:89] found id: ""
	I0814 17:41:11.575448   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.575455   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:11.575462   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:11.575520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:11.608126   80228 cri.go:89] found id: ""
	I0814 17:41:11.608155   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.608164   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:11.608171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:11.608222   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:11.640165   80228 cri.go:89] found id: ""
	I0814 17:41:11.640190   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.640198   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:11.640204   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:11.640263   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:11.674425   80228 cri.go:89] found id: ""
	I0814 17:41:11.674446   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.674455   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:11.674460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:11.674513   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:11.707448   80228 cri.go:89] found id: ""
	I0814 17:41:11.707477   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.707487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:11.707493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:11.707555   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:11.744309   80228 cri.go:89] found id: ""
	I0814 17:41:11.744338   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.744346   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:11.744363   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:11.744375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:11.824165   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:11.824196   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:11.862013   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:11.862039   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:11.913862   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:11.913902   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:11.927147   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:11.927178   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:11.998403   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.498590   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:14.512847   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:14.512938   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:14.549255   80228 cri.go:89] found id: ""
	I0814 17:41:14.549288   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.549306   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:14.549316   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:14.549382   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:14.588917   80228 cri.go:89] found id: ""
	I0814 17:41:14.588948   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.588956   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:14.588963   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:14.589012   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:14.622581   80228 cri.go:89] found id: ""
	I0814 17:41:14.622611   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.622621   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:14.622628   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:14.622693   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:14.656029   80228 cri.go:89] found id: ""
	I0814 17:41:14.656056   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.656064   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:14.656070   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:14.656117   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:14.687502   80228 cri.go:89] found id: ""
	I0814 17:41:14.687527   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.687536   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:14.687541   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:14.687614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:14.720682   80228 cri.go:89] found id: ""
	I0814 17:41:14.720713   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.720721   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:14.720728   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:14.720778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:14.752482   80228 cri.go:89] found id: ""
	I0814 17:41:14.752511   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.752520   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:14.752525   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:14.752577   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:14.792980   80228 cri.go:89] found id: ""
	I0814 17:41:14.793004   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.793014   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:14.793026   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:14.793042   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:14.845259   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:14.845297   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:14.858530   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:14.858556   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:14.931025   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.931054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:14.931067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:15.008081   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:15.008115   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.544873   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:17.557699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:17.557791   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:17.600314   80228 cri.go:89] found id: ""
	I0814 17:41:17.600347   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.600360   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:17.600370   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:17.600441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:17.634873   80228 cri.go:89] found id: ""
	I0814 17:41:17.634902   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.634914   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:17.634923   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:17.634986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:17.670521   80228 cri.go:89] found id: ""
	I0814 17:41:17.670552   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.670563   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:17.670571   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:17.670647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:17.705587   80228 cri.go:89] found id: ""
	I0814 17:41:17.705612   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.705626   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:17.705632   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:17.705682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:17.768178   80228 cri.go:89] found id: ""
	I0814 17:41:17.768207   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.768218   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:17.768226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:17.768290   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:17.804692   80228 cri.go:89] found id: ""
	I0814 17:41:17.804721   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.804729   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:17.804735   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:17.804795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:17.847994   80228 cri.go:89] found id: ""
	I0814 17:41:17.848030   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.848041   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:17.848052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:17.848122   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:17.883905   80228 cri.go:89] found id: ""
	I0814 17:41:17.883935   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.883944   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:17.883953   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.883965   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.931481   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.931522   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.983315   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.983363   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.996941   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.996981   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:18.067254   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:18.067279   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:18.067295   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:20.642099   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.655941   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.656014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.692525   80228 cri.go:89] found id: ""
	I0814 17:41:20.692554   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.692565   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:20.692577   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.692634   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.727721   80228 cri.go:89] found id: ""
	I0814 17:41:20.727755   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.727769   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:20.727778   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.727845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.770441   80228 cri.go:89] found id: ""
	I0814 17:41:20.770471   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.770481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:20.770488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.770550   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.807932   80228 cri.go:89] found id: ""
	I0814 17:41:20.807961   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.807968   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:20.807975   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.808030   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.849919   80228 cri.go:89] found id: ""
	I0814 17:41:20.849944   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.849963   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:20.849970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.850045   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.887351   80228 cri.go:89] found id: ""
	I0814 17:41:20.887382   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.887393   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:20.887401   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.887465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.921284   80228 cri.go:89] found id: ""
	I0814 17:41:20.921310   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.921321   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.921328   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:20.921409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:20.955238   80228 cri.go:89] found id: ""
	I0814 17:41:20.955267   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.955278   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:20.955288   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.955314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:21.024544   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:21.024565   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.024579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:21.103987   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.104019   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.145515   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.145550   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.197307   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.197346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:23.712584   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:23.726467   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:23.726545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:23.762871   80228 cri.go:89] found id: ""
	I0814 17:41:23.762906   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.762916   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:23.762922   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:23.762972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:23.800068   80228 cri.go:89] found id: ""
	I0814 17:41:23.800096   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.800105   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:23.800113   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:23.800173   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:23.834913   80228 cri.go:89] found id: ""
	I0814 17:41:23.834945   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.834956   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:23.834963   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:23.835022   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:23.871196   80228 cri.go:89] found id: ""
	I0814 17:41:23.871222   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.871233   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:23.871240   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:23.871294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:23.907830   80228 cri.go:89] found id: ""
	I0814 17:41:23.907854   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.907862   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:23.907868   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:23.907926   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:23.941110   80228 cri.go:89] found id: ""
	I0814 17:41:23.941133   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.941141   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:23.941146   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:23.941197   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:23.973602   80228 cri.go:89] found id: ""
	I0814 17:41:23.973631   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.973649   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:23.973655   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:23.973710   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:24.007398   80228 cri.go:89] found id: ""
	I0814 17:41:24.007436   80228 logs.go:276] 0 containers: []
	W0814 17:41:24.007450   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:24.007462   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:24.007478   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:24.061830   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:24.061867   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:24.075012   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:24.075046   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:24.148666   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:24.148692   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.148703   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.230208   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:24.230248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:26.776204   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:26.789057   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:26.789132   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:26.822531   80228 cri.go:89] found id: ""
	I0814 17:41:26.822564   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.822575   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:26.822590   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:26.822651   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:26.855314   80228 cri.go:89] found id: ""
	I0814 17:41:26.855353   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.855365   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:26.855372   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:26.855434   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:26.889389   80228 cri.go:89] found id: ""
	I0814 17:41:26.889413   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.889421   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:26.889427   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:26.889485   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:26.925478   80228 cri.go:89] found id: ""
	I0814 17:41:26.925500   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.925508   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:26.925514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:26.925560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:26.957012   80228 cri.go:89] found id: ""
	I0814 17:41:26.957042   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.957053   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:26.957061   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:26.957114   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:26.989358   80228 cri.go:89] found id: ""
	I0814 17:41:26.989388   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.989399   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:26.989406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:26.989468   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:27.024761   80228 cri.go:89] found id: ""
	I0814 17:41:27.024786   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.024805   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:27.024830   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:27.024895   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:27.059172   80228 cri.go:89] found id: ""
	I0814 17:41:27.059204   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.059215   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:27.059226   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:27.059240   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.096123   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:27.096151   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:27.147689   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:27.147719   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:27.161454   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:27.161483   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:27.234644   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:27.234668   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:27.234680   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:29.817428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:29.831731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:29.831811   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:29.868531   80228 cri.go:89] found id: ""
	I0814 17:41:29.868567   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.868577   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:29.868585   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:29.868657   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:29.913578   80228 cri.go:89] found id: ""
	I0814 17:41:29.913602   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.913611   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:29.913617   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:29.913677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:29.963916   80228 cri.go:89] found id: ""
	I0814 17:41:29.963939   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.963946   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:29.963952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:29.964011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:30.016735   80228 cri.go:89] found id: ""
	I0814 17:41:30.016763   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.016773   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:30.016781   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:30.016841   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:30.048852   80228 cri.go:89] found id: ""
	I0814 17:41:30.048880   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.048890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:30.048898   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:30.048960   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:30.080291   80228 cri.go:89] found id: ""
	I0814 17:41:30.080324   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.080335   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:30.080343   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:30.080506   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:30.113876   80228 cri.go:89] found id: ""
	I0814 17:41:30.113904   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.113914   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:30.113921   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:30.113984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:30.147568   80228 cri.go:89] found id: ""
	I0814 17:41:30.147594   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.147604   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:30.147614   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:30.147627   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:30.197596   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:30.197630   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:30.210576   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:30.210602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:30.277711   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:30.277731   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:30.277746   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:30.356556   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:30.356590   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:32.892697   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:32.909435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:32.909497   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:32.945055   80228 cri.go:89] found id: ""
	I0814 17:41:32.945080   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.945088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:32.945094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:32.945150   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:32.979266   80228 cri.go:89] found id: ""
	I0814 17:41:32.979294   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.979305   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:32.979312   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:32.979398   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:33.014260   80228 cri.go:89] found id: ""
	I0814 17:41:33.014286   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.014294   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:33.014299   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:33.014351   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:33.047590   80228 cri.go:89] found id: ""
	I0814 17:41:33.047622   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.047633   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:33.047646   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:33.047711   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:33.081258   80228 cri.go:89] found id: ""
	I0814 17:41:33.081294   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.081328   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:33.081337   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:33.081403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:33.112209   80228 cri.go:89] found id: ""
	I0814 17:41:33.112237   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.112247   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:33.112254   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:33.112318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:33.143854   80228 cri.go:89] found id: ""
	I0814 17:41:33.143892   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.143904   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:33.143913   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:33.143977   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:33.175147   80228 cri.go:89] found id: ""
	I0814 17:41:33.175190   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.175201   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:33.175212   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:33.175226   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:33.212877   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:33.212908   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:33.268067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:33.268103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:33.281357   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:33.281386   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:33.350233   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:33.350257   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:33.350269   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:35.929498   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:35.942290   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:35.942354   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:35.975782   80228 cri.go:89] found id: ""
	I0814 17:41:35.975809   80228 logs.go:276] 0 containers: []
	W0814 17:41:35.975818   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:35.975826   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:35.975886   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:36.008165   80228 cri.go:89] found id: ""
	I0814 17:41:36.008191   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.008200   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:36.008206   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:36.008262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:36.044912   80228 cri.go:89] found id: ""
	I0814 17:41:36.044937   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.044945   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:36.044954   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:36.045002   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:36.078068   80228 cri.go:89] found id: ""
	I0814 17:41:36.078096   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.078108   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:36.078116   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:36.078179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:36.110429   80228 cri.go:89] found id: ""
	I0814 17:41:36.110456   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.110467   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:36.110480   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:36.110540   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:36.142086   80228 cri.go:89] found id: ""
	I0814 17:41:36.142111   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.142119   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:36.142125   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:36.142186   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:36.172738   80228 cri.go:89] found id: ""
	I0814 17:41:36.172761   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.172769   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:36.172775   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:36.172831   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:36.204345   80228 cri.go:89] found id: ""
	I0814 17:41:36.204368   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.204376   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:36.204388   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:36.204403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:36.216667   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:36.216689   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:36.279509   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:36.279528   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:36.279540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:36.360411   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:36.360447   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:36.398193   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:36.398230   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:38.952415   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:38.968484   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:38.968554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:39.002450   80228 cri.go:89] found id: ""
	I0814 17:41:39.002479   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.002486   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:39.002493   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:39.002551   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:39.035840   80228 cri.go:89] found id: ""
	I0814 17:41:39.035868   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.035876   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:39.035882   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:39.035934   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:39.069900   80228 cri.go:89] found id: ""
	I0814 17:41:39.069929   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.069940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:39.069946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:39.069999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:39.104657   80228 cri.go:89] found id: ""
	I0814 17:41:39.104681   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.104689   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:39.104695   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:39.104751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:39.137279   80228 cri.go:89] found id: ""
	I0814 17:41:39.137312   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.137322   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:39.137330   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:39.137403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:39.170377   80228 cri.go:89] found id: ""
	I0814 17:41:39.170414   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.170424   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:39.170430   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:39.170491   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:39.205742   80228 cri.go:89] found id: ""
	I0814 17:41:39.205779   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.205790   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:39.205796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:39.205850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:39.239954   80228 cri.go:89] found id: ""
	I0814 17:41:39.239979   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.239987   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:39.239994   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:39.240011   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:39.276587   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:39.276619   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:39.329286   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:39.329322   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:39.342232   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:39.342257   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:39.411043   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:39.411063   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:39.411075   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:41.994479   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:42.007736   80228 kubeadm.go:597] duration metric: took 4m4.488869114s to restartPrimaryControlPlane
	W0814 17:41:42.007822   80228 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:42.007871   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:46.541593   80228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.533697889s)
	I0814 17:41:46.541676   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:46.556181   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:41:46.565943   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:41:46.575481   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:41:46.575501   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:41:46.575549   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:41:46.585143   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:41:46.585202   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:41:46.595157   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:41:46.604539   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:41:46.604600   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:41:46.613345   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.622186   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:41:46.622242   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.631221   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:41:46.640649   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:41:46.640706   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:41:46.650161   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:41:46.724104   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:41:46.724182   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:41:46.860463   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:41:46.860606   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:41:46.860725   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:41:47.036697   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:41:47.038444   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:41:47.038561   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:41:47.038670   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:41:47.038775   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:41:47.038860   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:41:47.038973   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:41:47.039067   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:41:47.039172   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:41:47.039256   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:41:47.039359   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:41:47.039456   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:41:47.039516   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:41:47.039587   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:41:47.278696   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:41:47.664300   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:41:47.988137   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:41:48.076560   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:41:48.093447   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:41:48.094656   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:41:48.094793   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:41:48.253225   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:41:48.255034   80228 out.go:204]   - Booting up control plane ...
	I0814 17:41:48.255160   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:41:48.259041   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:41:48.260074   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:41:48.260862   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:41:48.262910   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:42:28.263217   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:42:28.263629   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:28.263853   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:33.264169   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:33.264403   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:43.264648   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:43.264858   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:03.265508   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:03.265720   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:43.267316   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:43.267596   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:43.267623   80228 kubeadm.go:310] 
	I0814 17:43:43.267680   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:43:43.267757   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:43:43.267778   80228 kubeadm.go:310] 
	I0814 17:43:43.267839   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:43:43.267894   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:43:43.268029   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:43:43.268044   80228 kubeadm.go:310] 
	I0814 17:43:43.268190   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:43:43.268247   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:43:43.268296   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:43:43.268305   80228 kubeadm.go:310] 
	I0814 17:43:43.268446   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:43:43.268561   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:43:43.268572   80228 kubeadm.go:310] 
	I0814 17:43:43.268741   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:43:43.268907   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:43:43.269021   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:43:43.269120   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:43:43.269131   80228 kubeadm.go:310] 
	I0814 17:43:43.269560   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:43:43.269642   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:43:43.269698   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 17:43:43.269809   80228 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 17:43:43.269853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:43:43.733975   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:43.748632   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:43:43.758474   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:43:43.758493   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:43:43.758543   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:43:43.767721   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:43:43.767777   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:43:43.777259   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:43:43.786562   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:43:43.786623   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:43:43.795253   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.803677   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:43:43.803733   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.812416   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:43:43.821020   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:43:43.821075   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:43:43.829709   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:43:44.024836   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:45:40.060126   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:45:40.060266   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:45:40.061931   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:45:40.062003   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:45:40.062110   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:45:40.062231   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:45:40.062360   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:45:40.062459   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:45:40.063940   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:45:40.064041   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:45:40.064124   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:45:40.064230   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:45:40.064305   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:45:40.064398   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:45:40.064471   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:45:40.064550   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:45:40.064632   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:45:40.064712   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:45:40.064798   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:45:40.064857   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:45:40.064913   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:45:40.064956   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:45:40.065040   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:45:40.065146   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:45:40.065222   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:45:40.065366   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:45:40.065490   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:45:40.065547   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:45:40.065648   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:45:40.068108   80228 out.go:204]   - Booting up control plane ...
	I0814 17:45:40.068211   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:45:40.068294   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:45:40.068395   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:45:40.068522   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:45:40.068675   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:45:40.068751   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:45:40.068843   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069048   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069141   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069393   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069510   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069756   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069823   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069982   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070051   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.070204   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070211   80228 kubeadm.go:310] 
	I0814 17:45:40.070244   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:45:40.070291   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:45:40.070299   80228 kubeadm.go:310] 
	I0814 17:45:40.070337   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:45:40.070379   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:45:40.070504   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:45:40.070521   80228 kubeadm.go:310] 
	I0814 17:45:40.070673   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:45:40.070723   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:45:40.070764   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:45:40.070776   80228 kubeadm.go:310] 
	I0814 17:45:40.070876   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:45:40.070945   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:45:40.070953   80228 kubeadm.go:310] 
	I0814 17:45:40.071045   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:45:40.071151   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:45:40.071246   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:45:40.071363   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:45:40.071453   80228 kubeadm.go:310] 
	I0814 17:45:40.071481   80228 kubeadm.go:394] duration metric: took 8m2.599023024s to StartCluster
	I0814 17:45:40.071554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:45:40.071617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:45:40.115691   80228 cri.go:89] found id: ""
	I0814 17:45:40.115719   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.115727   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:45:40.115734   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:45:40.115798   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:45:40.155537   80228 cri.go:89] found id: ""
	I0814 17:45:40.155566   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.155574   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:45:40.155580   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:45:40.155645   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:45:40.189570   80228 cri.go:89] found id: ""
	I0814 17:45:40.189604   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.189616   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:45:40.189625   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:45:40.189708   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:45:40.222496   80228 cri.go:89] found id: ""
	I0814 17:45:40.222521   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.222528   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:45:40.222533   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:45:40.222590   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:45:40.255095   80228 cri.go:89] found id: ""
	I0814 17:45:40.255129   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.255142   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:45:40.255151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:45:40.255233   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:45:40.290297   80228 cri.go:89] found id: ""
	I0814 17:45:40.290326   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.290341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:45:40.290348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:45:40.290396   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:45:40.326660   80228 cri.go:89] found id: ""
	I0814 17:45:40.326685   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.326695   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:45:40.326701   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:45:40.326764   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:45:40.359867   80228 cri.go:89] found id: ""
	I0814 17:45:40.359896   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.359908   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:45:40.359918   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:45:40.359933   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:45:40.397513   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:45:40.397543   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:45:40.451744   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:45:40.451778   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:45:40.475817   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:45:40.475843   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:45:40.575913   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:45:40.575933   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:45:40.575946   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 17:45:40.683455   80228 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:45:40.683509   80228 out.go:239] * 
	W0814 17:45:40.683587   80228 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.683623   80228 out.go:239] * 
	W0814 17:45:40.684431   80228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:45:40.688043   80228 out.go:177] 
	W0814 17:45:40.689238   80228 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.689291   80228 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:45:40.689318   80228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:45:40.690913   80228 out.go:177] 

                                                
                                                
** /stderr **
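The kubeadm output above points at the kubelet as the failing component. Below is a minimal sketch of the checks that output itself recommends, run from inside the affected VM (e.g. via `minikube ssh -p old-k8s-version-505584`); the commands are taken from the kubeadm advice in the log, while the ssh step and the tail filter are illustrative additions and the exact output on this host was not captured.

    # Is the kubelet running, and why did it stop? (the stderr warning says the service is not enabled)
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50
    sudo systemctl enable kubelet.service

    # The health endpoint kubeadm polls; "connection refused" here matches the failure above
    curl -sSL http://localhost:10248/healthz

    # List control-plane containers via CRI-O, as suggested by kubeadm
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause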
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-505584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
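The suggestion emitted at the end of the error log is to pass --extra-config=kubelet.cgroup-driver=systemd. A hedged sketch of retrying the same start command with that flag appended is shown below; it was not executed as part of this run, and whether it resolves the kubelet failure is not verified here.

    out/minikube-linux-amd64 start -p old-k8s-version-505584 --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd   # workaround suggested in the error output above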
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 2 (240.613873ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-505584 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-505584 logs -n 25: (1.590731687s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-984053 sudo cat                              | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo find                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo crio                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-984053                                       | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-005029 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | disable-driver-mounts-005029                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:30 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-545149             | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309673            | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:33:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:33:46.321266   80228 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:33:46.321519   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321529   80228 out.go:304] Setting ErrFile to fd 2...
	I0814 17:33:46.321533   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321691   80228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:33:46.322185   80228 out.go:298] Setting JSON to false
	I0814 17:33:46.323102   80228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8170,"bootTime":1723648656,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:33:46.323161   80228 start.go:139] virtualization: kvm guest
	I0814 17:33:46.325361   80228 out.go:177] * [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:33:46.326668   80228 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:33:46.326679   80228 notify.go:220] Checking for updates...
	I0814 17:33:46.329217   80228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:33:46.330813   80228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:33:46.332019   80228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:33:46.333264   80228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:33:46.334480   80228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:33:46.336108   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:33:46.336521   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.336564   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.351154   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0814 17:33:46.351563   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.352042   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.352061   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.352395   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.352567   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.354248   80228 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 17:33:46.355547   80228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:33:46.355834   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.355865   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.370976   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0814 17:33:46.371452   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.371977   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.372008   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.372376   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.372624   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.407797   80228 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:33:46.408905   80228 start.go:297] selected driver: kvm2
	I0814 17:33:46.408918   80228 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.409022   80228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:33:46.409677   80228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.409753   80228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:33:46.424801   80228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:33:46.425288   80228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:33:46.425338   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:33:46.425349   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:33:46.425396   80228 start.go:340] cluster config:
	{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.425518   80228 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.427224   80228 out.go:177] * Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	I0814 17:33:46.428485   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:33:46.428516   80228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:33:46.428523   80228 cache.go:56] Caching tarball of preloaded images
	I0814 17:33:46.428589   80228 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:33:46.428600   80228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:33:46.428727   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:33:46.428899   80228 start.go:360] acquireMachinesLock for old-k8s-version-505584: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:33:47.579625   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:50.651557   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:56.731587   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:59.803787   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:05.883582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:08.959564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:15.035593   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:18.107634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:24.187624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:27.259634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:33.339631   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:36.411675   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:42.491633   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:45.563609   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:51.643582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:54.715620   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:00.795564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:03.867637   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:09.947634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:13.019646   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:19.099578   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:22.171640   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:28.251634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:31.323645   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:37.403627   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:40.475635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:46.555591   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:49.627635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:55.707632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:58.779532   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:04.859619   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:07.931632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:14.011612   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:17.083624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:23.163638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:26.235638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
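The long run of "dial tcp 192.168.39.162:22: connect: no route to host" entries above is libmachine repeatedly probing the guest's SSH port until it answers (it never does here, which is why provisioning later fails with "host is not running"). A minimal, standard-library sketch of that kind of reachability loop; the function name, interval, and overall timeout are illustrative, not minikube's actual values:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls addr until a TCP connection succeeds or the overall
// deadline passes. Interval and timeout are placeholders for illustration.
func waitForTCP(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting for %s: last error: %w", addr, err)
		}
		fmt.Printf("Error dialing TCP: %v (retrying in %s)\n", err, interval)
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForTCP("192.168.39.162:22", 3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}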
	I0814 17:36:29.240279   79521 start.go:364] duration metric: took 4m23.88398072s to acquireMachinesLock for "embed-certs-309673"
	I0814 17:36:29.240341   79521 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:29.240351   79521 fix.go:54] fixHost starting: 
	I0814 17:36:29.240703   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:29.240730   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:29.255901   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0814 17:36:29.256372   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:29.256816   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:36:29.256839   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:29.257153   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:29.257337   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:29.257518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:36:29.259382   79521 fix.go:112] recreateIfNeeded on embed-certs-309673: state=Stopped err=<nil>
	I0814 17:36:29.259419   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	W0814 17:36:29.259583   79521 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:29.261931   79521 out.go:177] * Restarting existing kvm2 VM for "embed-certs-309673" ...
	I0814 17:36:29.263301   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Start
	I0814 17:36:29.263487   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring networks are active...
	I0814 17:36:29.264251   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network default is active
	I0814 17:36:29.264797   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network mk-embed-certs-309673 is active
	I0814 17:36:29.265331   79521 main.go:141] libmachine: (embed-certs-309673) Getting domain xml...
	I0814 17:36:29.266055   79521 main.go:141] libmachine: (embed-certs-309673) Creating domain...
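Here fixHost sees the existing machine in state=Stopped and asks the kvm2 driver to boot the existing libvirt domain instead of recreating it (.Start, ensure networks, re-read the domain XML). A rough equivalent expressed by shelling out to the virsh CLI; this is a sketch under the assumption that libvirt's default system connection is in use, with the domain name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// startDomain boots an existing, stopped libvirt domain, loosely mirroring
// what the kvm2 driver's .Start call does through the libvirt API.
func startDomain(name string) error {
	state, _ := exec.Command("virsh", "domstate", name).Output()
	if strings.TrimSpace(string(state)) == "running" {
		return nil // already up, nothing to do
	}
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start %s: %v: %s", name, err, out)
	}
	return nil
}

func main() {
	if err := startDomain("embed-certs-309673"); err != nil {
		fmt.Println(err)
	}
}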
	I0814 17:36:29.237663   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:29.237704   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238088   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:36:29.238131   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238337   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:36:29.240159   79367 machine.go:97] duration metric: took 4m37.421920583s to provisionDockerMachine
	I0814 17:36:29.240195   79367 fix.go:56] duration metric: took 4m37.443181113s for fixHost
	I0814 17:36:29.240202   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 4m37.443414836s
	W0814 17:36:29.240223   79367 start.go:714] error starting host: provision: host is not running
	W0814 17:36:29.240348   79367 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 17:36:29.240358   79367 start.go:729] Will try again in 5 seconds ...
	I0814 17:36:30.482377   79521 main.go:141] libmachine: (embed-certs-309673) Waiting to get IP...
	I0814 17:36:30.483405   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.483750   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.483837   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.483729   80776 retry.go:31] will retry after 224.900105ms: waiting for machine to come up
	I0814 17:36:30.710259   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.710718   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.710748   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.710679   80776 retry.go:31] will retry after 322.892012ms: waiting for machine to come up
	I0814 17:36:31.035358   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.035807   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.035835   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.035757   80776 retry.go:31] will retry after 374.226901ms: waiting for machine to come up
	I0814 17:36:31.411228   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.411783   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.411813   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.411717   80776 retry.go:31] will retry after 472.149905ms: waiting for machine to come up
	I0814 17:36:31.885265   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.885787   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.885810   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.885757   80776 retry.go:31] will retry after 676.063343ms: waiting for machine to come up
	I0814 17:36:32.563206   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:32.563711   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:32.563745   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:32.563658   80776 retry.go:31] will retry after 904.634039ms: waiting for machine to come up
	I0814 17:36:33.469832   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:33.470255   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:33.470278   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:33.470206   80776 retry.go:31] will retry after 1.132974911s: waiting for machine to come up
	I0814 17:36:34.605040   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:34.605542   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:34.605576   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:34.605498   80776 retry.go:31] will retry after 1.210457498s: waiting for machine to come up
	I0814 17:36:34.242590   79367 start.go:360] acquireMachinesLock for no-preload-545149: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:36:35.817809   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:35.818152   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:35.818177   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:35.818111   80776 retry.go:31] will retry after 1.275236618s: waiting for machine to come up
	I0814 17:36:37.095551   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:37.095975   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:37.096001   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:37.095937   80776 retry.go:31] will retry after 1.716925001s: waiting for machine to come up
	I0814 17:36:38.814927   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:38.815916   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:38.815943   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:38.815864   80776 retry.go:31] will retry after 2.040428036s: waiting for machine to come up
	I0814 17:36:40.858640   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:40.859157   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:40.859188   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:40.859108   80776 retry.go:31] will retry after 2.259949864s: waiting for machine to come up
	I0814 17:36:43.120436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:43.120913   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:43.120939   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:43.120879   80776 retry.go:31] will retry after 3.64334808s: waiting for machine to come up
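The retry.go lines above poll the libvirt DHCP leases for the domain's IP with a randomized, growing delay (224ms, 322ms, 374ms, ... 3.6s). A small sketch of that backoff shape, with made-up parameters rather than minikube's actual retry constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds, sleeping a jittered, growing
// interval between attempts, similar in shape to the log above.
func retryWithBackoff(fn func() error, attempts int) error {
	base := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		base = base * 3 / 2 // grow roughly 1.5x per attempt
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	n := 0
	err := retryWithBackoff(func() error {
		n++
		if n < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 13)
	fmt.Println(err)
}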
	I0814 17:36:47.975977   79871 start.go:364] duration metric: took 3m52.18367446s to acquireMachinesLock for "default-k8s-diff-port-885666"
	I0814 17:36:47.976049   79871 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:47.976064   79871 fix.go:54] fixHost starting: 
	I0814 17:36:47.976457   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:47.976492   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:47.993513   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0814 17:36:47.993940   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:47.994480   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:36:47.994504   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:47.994815   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:47.995005   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:36:47.995181   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:36:47.996716   79871 fix.go:112] recreateIfNeeded on default-k8s-diff-port-885666: state=Stopped err=<nil>
	I0814 17:36:47.996755   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	W0814 17:36:47.996923   79871 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:47.998967   79871 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-885666" ...
	I0814 17:36:46.766908   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767458   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has current primary IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767500   79521 main.go:141] libmachine: (embed-certs-309673) Found IP for machine: 192.168.61.2
	I0814 17:36:46.767516   79521 main.go:141] libmachine: (embed-certs-309673) Reserving static IP address...
	I0814 17:36:46.767974   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.767993   79521 main.go:141] libmachine: (embed-certs-309673) Reserved static IP address: 192.168.61.2
	I0814 17:36:46.768006   79521 main.go:141] libmachine: (embed-certs-309673) DBG | skip adding static IP to network mk-embed-certs-309673 - found existing host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"}
	I0814 17:36:46.768017   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Getting to WaitForSSH function...
	I0814 17:36:46.768023   79521 main.go:141] libmachine: (embed-certs-309673) Waiting for SSH to be available...
	I0814 17:36:46.770187   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770517   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.770548   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770612   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH client type: external
	I0814 17:36:46.770643   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa (-rw-------)
	I0814 17:36:46.770672   79521 main.go:141] libmachine: (embed-certs-309673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:36:46.770697   79521 main.go:141] libmachine: (embed-certs-309673) DBG | About to run SSH command:
	I0814 17:36:46.770703   79521 main.go:141] libmachine: (embed-certs-309673) DBG | exit 0
	I0814 17:36:46.895078   79521 main.go:141] libmachine: (embed-certs-309673) DBG | SSH cmd err, output: <nil>: 
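WaitForSSH above uses the external ssh binary with the exact non-interactive option set printed in the log and simply runs "exit 0" to confirm the daemon is reachable. A self-contained sketch of shelling out the same way; user, host, and key path are copied from the log and should be treated as placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// sshExit0 runs `exit 0` on the guest through the system ssh client using
// the non-interactive options seen in the log above.
func sshExit0(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := sshExit0("docker", "192.168.61.2",
		"/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa")
	fmt.Println(err)
}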
	I0814 17:36:46.895444   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetConfigRaw
	I0814 17:36:46.896033   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:46.898715   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899085   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.899117   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899434   79521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/config.json ...
	I0814 17:36:46.899701   79521 machine.go:94] provisionDockerMachine start ...
	I0814 17:36:46.899723   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:46.899906   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:46.901985   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902244   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.902268   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902398   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:46.902564   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902707   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902829   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:46.902966   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:46.903201   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:46.903213   79521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:36:47.007289   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:36:47.007313   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007589   79521 buildroot.go:166] provisioning hostname "embed-certs-309673"
	I0814 17:36:47.007608   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007802   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.010311   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010631   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.010670   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010805   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.010956   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011067   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.011269   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.011455   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.011467   79521 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-309673 && echo "embed-certs-309673" | sudo tee /etc/hostname
	I0814 17:36:47.128575   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-309673
	
	I0814 17:36:47.128601   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.131125   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131464   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.131493   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131655   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.131970   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132146   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132286   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.132457   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.132614   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.132630   79521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-309673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-309673/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-309673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:36:47.247426   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:47.247469   79521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:36:47.247486   79521 buildroot.go:174] setting up certificates
	I0814 17:36:47.247496   79521 provision.go:84] configureAuth start
	I0814 17:36:47.247506   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.247768   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.250616   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.250993   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.251018   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.251148   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.253149   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.253465   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253551   79521 provision.go:143] copyHostCerts
	I0814 17:36:47.253616   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:36:47.253628   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:36:47.253703   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:36:47.253817   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:36:47.253835   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:36:47.253875   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:36:47.253952   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:36:47.253962   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:36:47.253994   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:36:47.254060   79521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.embed-certs-309673 san=[127.0.0.1 192.168.61.2 embed-certs-309673 localhost minikube]
	I0814 17:36:47.338831   79521 provision.go:177] copyRemoteCerts
	I0814 17:36:47.338892   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:36:47.338921   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.341582   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.341897   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.341915   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.342053   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.342237   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.342374   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.342497   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.424777   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:36:47.446682   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:36:47.467672   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:36:47.488423   79521 provision.go:87] duration metric: took 240.914172ms to configureAuth
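configureAuth regenerates a server certificate whose SANs cover 127.0.0.1, the VM IP, the machine name, localhost, and minikube (the "generating server cert" line above), then copies it to /etc/docker on the guest. A compact crypto/x509 sketch of issuing a cert with those SANs; it is self-signed here for brevity, whereas minikube signs server.pem with its own CA key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-309673"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line: IPs plus hostnames.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.2")},
		DNSNames:    []string{"embed-certs-309673", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}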
	I0814 17:36:47.488453   79521 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:36:47.488645   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:36:47.488733   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.491453   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.491793   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.491816   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.492028   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.492216   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492351   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492479   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.492716   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.492909   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.492931   79521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:36:47.746210   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:36:47.746248   79521 machine.go:97] duration metric: took 846.530779ms to provisionDockerMachine
	I0814 17:36:47.746260   79521 start.go:293] postStartSetup for "embed-certs-309673" (driver="kvm2")
	I0814 17:36:47.746274   79521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:36:47.746297   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.746659   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:36:47.746694   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.749342   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749674   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.749702   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749831   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.750004   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.750126   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.750272   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.833279   79521 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:36:47.837076   79521 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:36:47.837099   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:36:47.837183   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:36:47.837269   79521 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:36:47.837387   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:36:47.845640   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:47.866978   79521 start.go:296] duration metric: took 120.70557ms for postStartSetup
	I0814 17:36:47.867012   79521 fix.go:56] duration metric: took 18.626661733s for fixHost
	I0814 17:36:47.867030   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.869687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870016   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.870046   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870220   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.870399   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870660   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870827   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.870999   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.871209   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.871221   79521 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:36:47.975817   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657007.950271601
	
	I0814 17:36:47.975848   79521 fix.go:216] guest clock: 1723657007.950271601
	I0814 17:36:47.975860   79521 fix.go:229] Guest: 2024-08-14 17:36:47.950271601 +0000 UTC Remote: 2024-08-14 17:36:47.867016056 +0000 UTC m=+282.648397849 (delta=83.255545ms)
	I0814 17:36:47.975889   79521 fix.go:200] guest clock delta is within tolerance: 83.255545ms
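The clock check runs `date +%s.%N` on the guest, parses the result, and compares it with the host's wall clock; the 83ms delta above is accepted as within tolerance. A sketch of that comparison; the tolerance constant below is an assumption for illustration, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` from the guest and returns
// how far the guest clock is from the local one.
func clockDelta(guestDate string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	d, err := clockDelta("1723657007.950271601") // value from the log above
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // illustrative tolerance, not minikube's constant
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, d < tolerance && d > -tolerance)
}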
	I0814 17:36:47.975896   79521 start.go:83] releasing machines lock for "embed-certs-309673", held for 18.735575335s
	I0814 17:36:47.975931   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.976213   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.978934   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979457   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.979483   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979625   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980134   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980303   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980382   79521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:36:47.980428   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.980574   79521 ssh_runner.go:195] Run: cat /version.json
	I0814 17:36:47.980603   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.983247   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983557   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983649   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.983687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983828   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984032   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984042   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.984063   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.984183   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984232   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984320   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984412   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.984467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984608   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:48.064891   79521 ssh_runner.go:195] Run: systemctl --version
	I0814 17:36:48.101403   79521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:36:48.239841   79521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:36:48.245634   79521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:36:48.245718   79521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:36:48.260517   79521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:36:48.260543   79521 start.go:495] detecting cgroup driver to use...
	I0814 17:36:48.260597   79521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:36:48.275003   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:36:48.290316   79521 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:36:48.290376   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:36:48.304351   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:36:48.320954   79521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:36:48.434176   79521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:36:48.582137   79521 docker.go:233] disabling docker service ...
	I0814 17:36:48.582217   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:36:48.595784   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:36:48.608379   79521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:36:48.735500   79521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:36:48.876194   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:36:48.891826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:36:48.910820   79521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:36:48.910887   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.921125   79521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:36:48.921198   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.931615   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.942779   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.953124   79521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:36:48.963454   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.974457   79521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.991583   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:49.006059   79521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:36:49.015586   79521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:36:49.015649   79521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:36:49.028742   79521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:36:49.038126   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:49.155387   79521 ssh_runner.go:195] Run: sudo systemctl restart crio
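The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl), loads br_netfilter, enables IP forwarding, and restarts crio. A rough, simplified stand-in for the two main substitutions using Go's regexp package; the path and values are copied from the log, and this should only be run against a scratch copy, not a live node:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // scratch copy of /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	conf := string(data)
	// pause_image = "registry.k8s.io/pause:3.10"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// cgroup_manager = "cgroupfs", with conmon delegated to the pod cgroup
	// (the real sed sequence first deletes any existing conmon_cgroup line).
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		fmt.Println(err)
	}
}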
	I0814 17:36:49.318598   79521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:36:49.318679   79521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:36:49.323575   79521 start.go:563] Will wait 60s for crictl version
	I0814 17:36:49.323636   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:36:49.327233   79521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:36:49.369724   79521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:36:49.369814   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.399516   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.431594   79521 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:36:49.432940   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:49.435776   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436168   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:49.436199   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436447   79521 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 17:36:49.440606   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:49.453159   79521 kubeadm.go:883] updating cluster {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:36:49.453272   79521 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:36:49.453311   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:49.486635   79521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:36:49.486708   79521 ssh_runner.go:195] Run: which lz4
	I0814 17:36:49.490626   79521 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:36:49.494822   79521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:36:49.494852   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:36:48.000271   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Start
	I0814 17:36:48.000453   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring networks are active...
	I0814 17:36:48.001246   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network default is active
	I0814 17:36:48.001621   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network mk-default-k8s-diff-port-885666 is active
	I0814 17:36:48.002158   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Getting domain xml...
	I0814 17:36:48.002982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Creating domain...
	I0814 17:36:49.272729   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting to get IP...
	I0814 17:36:49.273726   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274182   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274273   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.274157   80921 retry.go:31] will retry after 208.258845ms: waiting for machine to come up
	I0814 17:36:49.483781   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484278   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.484211   80921 retry.go:31] will retry after 318.193974ms: waiting for machine to come up
	I0814 17:36:49.803815   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804311   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.804277   80921 retry.go:31] will retry after 426.023242ms: waiting for machine to come up
	I0814 17:36:50.232060   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232610   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232646   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.232519   80921 retry.go:31] will retry after 534.392065ms: waiting for machine to come up
	I0814 17:36:50.745416   79521 crio.go:462] duration metric: took 1.254815826s to copy over tarball
	I0814 17:36:50.745515   79521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:36:52.865848   79521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.120299454s)
	I0814 17:36:52.865879   79521 crio.go:469] duration metric: took 2.120437156s to extract the tarball
	I0814 17:36:52.865887   79521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:36:52.901808   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:52.946366   79521 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:36:52.946386   79521 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:36:52.946394   79521 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.31.0 crio true true} ...
	I0814 17:36:52.946492   79521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-309673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:36:52.946556   79521 ssh_runner.go:195] Run: crio config
	I0814 17:36:52.992520   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:36:52.992541   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:36:52.992553   79521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:36:52.992577   79521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-309673 NodeName:embed-certs-309673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:36:52.992740   79521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-309673"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:36:52.992811   79521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:36:53.002460   79521 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:36:53.002539   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:36:53.011167   79521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 17:36:53.026436   79521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:36:53.041728   79521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0814 17:36:53.059102   79521 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0814 17:36:53.062728   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:53.073803   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:53.200870   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:36:53.217448   79521 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673 for IP: 192.168.61.2
	I0814 17:36:53.217472   79521 certs.go:194] generating shared ca certs ...
	I0814 17:36:53.217495   79521 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:36:53.217694   79521 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:36:53.217755   79521 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:36:53.217766   79521 certs.go:256] generating profile certs ...
	I0814 17:36:53.217876   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/client.key
	I0814 17:36:53.217961   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key.83510bb8
	I0814 17:36:53.218034   79521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key
	I0814 17:36:53.218202   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:36:53.218248   79521 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:36:53.218272   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:36:53.218309   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:36:53.218343   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:36:53.218380   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:36:53.218447   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:53.219187   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:36:53.273437   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:36:53.307566   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:36:53.330107   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:36:53.360324   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 17:36:53.386974   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:36:53.409537   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:36:53.433873   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:36:53.456408   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:36:53.478233   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:36:53.500264   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:36:53.522440   79521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:36:53.538977   79521 ssh_runner.go:195] Run: openssl version
	I0814 17:36:53.544866   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:36:53.555085   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559340   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559399   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.565106   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:36:53.575561   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:36:53.585605   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589838   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589911   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.595165   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:36:53.604934   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:36:53.615153   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619362   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619435   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.624949   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
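
The three install steps above hash each CA certificate with openssl and link it into /etc/ssl/certs under its OpenSSL subject hash, which is why the log creates names like 51391683.0 and b5213941.0. A minimal Go sketch of that idea, assuming local execution with the openssl binary on PATH rather than minikube's ssh_runner, and using the hypothetical helper name installCACert:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert mirrors the pattern in the log: compute the subject hash of a PEM
// certificate with openssl, then symlink it into /etc/ssl/certs as "<hash>.0",
// which is the name OpenSSL uses when looking up trust anchors.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs replaces any stale symlink left over from a previous run.
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
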
	I0814 17:36:53.635459   79521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:36:53.639814   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:36:53.645419   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:36:53.651013   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:36:53.657004   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:36:53.662540   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:36:53.668187   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:36:53.673762   79521 kubeadm.go:392] StartCluster: {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:36:53.673867   79521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:36:53.673930   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.709404   79521 cri.go:89] found id: ""
	I0814 17:36:53.709490   79521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:36:53.719041   79521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:36:53.719068   79521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:36:53.719123   79521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:36:53.728077   79521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:36:53.729030   79521 kubeconfig.go:125] found "embed-certs-309673" server: "https://192.168.61.2:8443"
	I0814 17:36:53.730943   79521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:36:53.739841   79521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0814 17:36:53.739872   79521 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:36:53.739886   79521 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:36:53.739947   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.777400   79521 cri.go:89] found id: ""
	I0814 17:36:53.777476   79521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:36:53.792838   79521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:36:53.802189   79521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:36:53.802223   79521 kubeadm.go:157] found existing configuration files:
	
	I0814 17:36:53.802278   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:36:53.813778   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:36:53.813854   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:36:53.825962   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:36:53.834929   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:36:53.834987   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:36:53.846315   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.855138   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:36:53.855206   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.864109   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:36:53.872613   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:36:53.872672   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:36:53.881307   79521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:36:53.890148   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.002103   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.664940   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.868608   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.932317   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
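
The five commands above replay individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the saved /var/tmp/minikube/kubeadm.yaml instead of running a full kubeadm init, which is how the existing cluster state is reused on restart. A rough Go sketch of the same sequence, assuming it runs directly on the node with kubeadm on PATH rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase order as in the log; each phase reads the shared kubeadm config.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		fmt.Printf("kubeadm %v\n%s\n", args, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}
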
	I0814 17:36:55.006430   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:36:55.006523   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:50.768099   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768599   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768629   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.768554   80921 retry.go:31] will retry after 487.741283ms: waiting for machine to come up
	I0814 17:36:51.258499   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259047   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:51.258975   80921 retry.go:31] will retry after 831.435484ms: waiting for machine to come up
	I0814 17:36:52.091900   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092297   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092351   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:52.092249   80921 retry.go:31] will retry after 1.067858402s: waiting for machine to come up
	I0814 17:36:53.161928   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162393   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:53.162366   80921 retry.go:31] will retry after 1.33971606s: waiting for machine to come up
	I0814 17:36:54.503810   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504184   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504214   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:54.504121   80921 retry.go:31] will retry after 1.4882184s: waiting for machine to come up
	I0814 17:36:55.506634   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.007367   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.507265   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.007343   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.026436   79521 api_server.go:72] duration metric: took 2.020005984s to wait for apiserver process to appear ...
	I0814 17:36:57.026471   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:36:57.026496   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:36:55.994824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995255   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995283   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:55.995206   80921 retry.go:31] will retry after 1.65461779s: waiting for machine to come up
	I0814 17:36:57.651449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651837   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651867   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:57.651794   80921 retry.go:31] will retry after 2.38071296s: waiting for machine to come up
	I0814 17:37:00.033719   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034261   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034290   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:00.034204   80921 retry.go:31] will retry after 3.476533232s: waiting for machine to come up
	I0814 17:37:00.329636   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.329674   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.329689   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.357287   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.357334   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.527150   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.536020   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:00.536058   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.026558   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.034241   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.034271   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.526814   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.536226   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.536267   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:02.026791   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:02.031068   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:37:02.037240   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:02.037266   79521 api_server.go:131] duration metric: took 5.010786446s to wait for apiserver health ...
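
The healthz polling above first gets 403 (anonymous access is denied), then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200 "ok". A minimal sketch of such a wait loop, not minikube's actual api_server.go, assuming a self-signed apiserver certificate so TLS verification is skipped here:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification only because the apiserver cert is self-signed in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	_ = waitForHealthz("https://192.168.61.2:8443/healthz", 2*time.Minute)
}
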
	I0814 17:37:02.037278   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:37:02.037286   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:02.039248   79521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:02.040543   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:02.050754   79521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:02.067333   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:02.076082   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:02.076115   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:02.076130   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:02.076139   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:02.076178   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:02.076198   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:02.076218   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:02.076233   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:02.076242   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:02.076253   79521 system_pods.go:74] duration metric: took 8.901356ms to wait for pod list to return data ...
	I0814 17:37:02.076266   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:02.080064   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:02.080087   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:02.080101   79521 node_conditions.go:105] duration metric: took 3.829329ms to run NodePressure ...
	I0814 17:37:02.080121   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:02.359163   79521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368689   79521 kubeadm.go:739] kubelet initialised
	I0814 17:37:02.368717   79521 kubeadm.go:740] duration metric: took 9.524301ms waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368728   79521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:02.376056   79521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.381317   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381347   79521 pod_ready.go:81] duration metric: took 5.262062ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.381359   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381370   79521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.386799   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386822   79521 pod_ready.go:81] duration metric: took 5.440585ms for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.386832   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386838   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.392829   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392853   79521 pod_ready.go:81] duration metric: took 6.003762ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.392864   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392874   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.470943   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470975   79521 pod_ready.go:81] duration metric: took 78.089715ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.470984   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470996   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.870134   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870163   79521 pod_ready.go:81] duration metric: took 399.157385ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.870175   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870183   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.270805   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270837   79521 pod_ready.go:81] duration metric: took 400.647029ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.270848   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270856   79521 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.671023   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671058   79521 pod_ready.go:81] duration metric: took 400.191147ms for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.671070   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671079   79521 pod_ready.go:38] duration metric: took 1.302340033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
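
The pod_ready.go loop above waits for each system-critical pod's Ready condition and skips pods whose node is not yet Ready. A simplified client-go sketch of the core check (the node-Ready short-circuit is omitted), assuming the k8s.io/client-go module is available and using the kubeconfig path from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its PodReady condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19446-13977/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-embed-certs-309673", 4*time.Minute))
}
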
	I0814 17:37:03.671098   79521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:37:03.683676   79521 ops.go:34] apiserver oom_adj: -16
	I0814 17:37:03.683701   79521 kubeadm.go:597] duration metric: took 9.964625256s to restartPrimaryControlPlane
	I0814 17:37:03.683712   79521 kubeadm.go:394] duration metric: took 10.009956133s to StartCluster
	I0814 17:37:03.683729   79521 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.683809   79521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:03.685474   79521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.685708   79521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:37:03.685766   79521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:37:03.685850   79521 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-309673"
	I0814 17:37:03.685862   79521 addons.go:69] Setting default-storageclass=true in profile "embed-certs-309673"
	I0814 17:37:03.685900   79521 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-309673"
	I0814 17:37:03.685907   79521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-309673"
	W0814 17:37:03.685911   79521 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:37:03.685933   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:03.685933   79521 addons.go:69] Setting metrics-server=true in profile "embed-certs-309673"
	I0814 17:37:03.685988   79521 addons.go:234] Setting addon metrics-server=true in "embed-certs-309673"
	W0814 17:37:03.686006   79521 addons.go:243] addon metrics-server should already be in state true
	I0814 17:37:03.685945   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686076   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686284   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686362   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686391   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686422   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686482   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686538   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.687598   79521 out.go:177] * Verifying Kubernetes components...
	I0814 17:37:03.688995   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:03.701610   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0814 17:37:03.702174   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.702789   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.702817   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.703223   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.703682   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.704077   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0814 17:37:03.704508   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.704864   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0814 17:37:03.705141   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705154   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.705224   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.705473   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.705656   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705670   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.706806   79521 addons.go:234] Setting addon default-storageclass=true in "embed-certs-309673"
	W0814 17:37:03.706824   79521 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:37:03.706851   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.707093   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707112   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.707420   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.707536   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707584   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.708017   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.708079   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.722383   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0814 17:37:03.722779   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.723288   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.723307   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.728799   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0814 17:37:03.728839   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38781
	I0814 17:37:03.728928   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.729426   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729495   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729776   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.729809   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729967   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.729973   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.730360   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730371   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730698   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.730749   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.732979   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.733596   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.735250   79521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:03.735262   79521 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:37:03.736576   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:37:03.736593   79521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:37:03.736607   79521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.736612   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.736620   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:37:03.736637   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.740008   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740123   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740491   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740558   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740676   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.740819   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740842   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740872   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.740994   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741120   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.741160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.741527   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.741692   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741817   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.749144   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0814 17:37:03.749482   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.749914   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.749929   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.750267   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.750467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.752107   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.752325   79521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:03.752339   79521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:37:03.752360   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.754559   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.754845   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.754859   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.755073   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.755247   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.755402   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.755529   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.877535   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:03.897022   79521 node_ready.go:35] waiting up to 6m0s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:03.951512   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.988066   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:37:03.988085   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:37:04.014925   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:04.025506   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:37:04.025531   79521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:37:04.072457   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:04.072480   79521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:37:04.104804   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:05.067867   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.116315804s)
	I0814 17:37:05.067888   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052939793s)
	I0814 17:37:05.067925   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.067935   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068000   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068023   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068241   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068322   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068336   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068345   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068364   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068454   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068485   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068497   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068505   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068795   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068815   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068823   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068830   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068872   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068905   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.087716   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.087746   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.088086   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.088106   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113388   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.008529856s)
	I0814 17:37:05.113441   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113458   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.113736   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.113787   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.113800   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113812   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113823   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.114057   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.114071   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.114081   79521 addons.go:475] Verifying addon metrics-server=true in "embed-certs-309673"
	I0814 17:37:05.114163   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.116443   79521 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:37:05.118087   79521 addons.go:510] duration metric: took 1.432323959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:37:03.512364   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512842   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:03.512785   80921 retry.go:31] will retry after 4.358649621s: waiting for machine to come up
	I0814 17:37:09.324026   80228 start.go:364] duration metric: took 3m22.895078586s to acquireMachinesLock for "old-k8s-version-505584"
	I0814 17:37:09.324085   80228 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:09.324101   80228 fix.go:54] fixHost starting: 
	I0814 17:37:09.324533   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:09.324575   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:09.344085   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0814 17:37:09.344490   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:09.344980   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:37:09.345006   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:09.345416   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:09.345674   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:09.345842   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetState
	I0814 17:37:09.347489   80228 fix.go:112] recreateIfNeeded on old-k8s-version-505584: state=Stopped err=<nil>
	I0814 17:37:09.347511   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	W0814 17:37:09.347696   80228 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:09.349747   80228 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-505584" ...
	I0814 17:37:05.901013   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.901054   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.876377   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876820   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has current primary IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876845   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Found IP for machine: 192.168.50.184
	I0814 17:37:07.876857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserving static IP address...
	I0814 17:37:07.877281   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.877300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserved static IP address: 192.168.50.184
	I0814 17:37:07.877320   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | skip adding static IP to network mk-default-k8s-diff-port-885666 - found existing host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"}
	I0814 17:37:07.877339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Getting to WaitForSSH function...
	I0814 17:37:07.877355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for SSH to be available...
	I0814 17:37:07.879843   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880200   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.880242   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880419   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH client type: external
	I0814 17:37:07.880445   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa (-rw-------)
	I0814 17:37:07.880496   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:07.880517   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | About to run SSH command:
	I0814 17:37:07.880549   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | exit 0
	I0814 17:37:08.007553   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:08.007929   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetConfigRaw
	I0814 17:37:08.009171   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.012358   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.012772   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.012804   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.013076   79871 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/config.json ...
	I0814 17:37:08.013284   79871 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:08.013310   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:08.013579   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.015965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016325   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.016363   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016491   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.016680   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.016873   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.017004   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.017140   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.017354   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.017376   79871 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:08.132369   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:08.132404   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132657   79871 buildroot.go:166] provisioning hostname "default-k8s-diff-port-885666"
	I0814 17:37:08.132695   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.136230   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.136696   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136937   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.137163   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137350   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.137672   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.137878   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.137900   79871 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-885666 && echo "default-k8s-diff-port-885666" | sudo tee /etc/hostname
	I0814 17:37:08.273593   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-885666
	
	I0814 17:37:08.273626   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.276470   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.276830   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.276862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.277137   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.277382   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277547   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277713   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.277855   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.278052   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.278072   79871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-885666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-885666/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-885666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:08.401522   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:08.401556   79871 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:08.401602   79871 buildroot.go:174] setting up certificates
	I0814 17:37:08.401626   79871 provision.go:84] configureAuth start
	I0814 17:37:08.401650   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.401963   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.404855   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.405285   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405521   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.407826   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408338   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.408371   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408515   79871 provision.go:143] copyHostCerts
	I0814 17:37:08.408583   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:08.408597   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:08.408681   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:08.408812   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:08.408823   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:08.408861   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:08.408947   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:08.408956   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:08.408984   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:08.409064   79871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-885666 san=[127.0.0.1 192.168.50.184 default-k8s-diff-port-885666 localhost minikube]
	I0814 17:37:08.613459   79871 provision.go:177] copyRemoteCerts
	I0814 17:37:08.613530   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:08.613575   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.616704   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617044   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.617072   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.617515   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.617698   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.617844   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:08.705505   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:08.728835   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 17:37:08.751995   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:08.774577   79871 provision.go:87] duration metric: took 372.933752ms to configureAuth
	I0814 17:37:08.774609   79871 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:08.774812   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:08.774880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.777840   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778235   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.778260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778527   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.778752   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.778899   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.779020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.779162   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.779437   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.779458   79871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:09.055900   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:09.055927   79871 machine.go:97] duration metric: took 1.04262996s to provisionDockerMachine
	I0814 17:37:09.055943   79871 start.go:293] postStartSetup for "default-k8s-diff-port-885666" (driver="kvm2")
	I0814 17:37:09.055957   79871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:09.055982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.056325   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:09.056355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.059396   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.059853   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.059888   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.060064   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.060280   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.060558   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.060745   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.150649   79871 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:09.155263   79871 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:09.155295   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:09.155400   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:09.155500   79871 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:09.155623   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:09.167051   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:09.197223   79871 start.go:296] duration metric: took 141.264897ms for postStartSetup
	I0814 17:37:09.197324   79871 fix.go:56] duration metric: took 21.221265818s for fixHost
	I0814 17:37:09.197356   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.201388   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.201965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.202011   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.202109   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.202354   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202569   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202800   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.203010   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:09.203196   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:09.203209   79871 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:09.323868   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657029.302975780
	
	I0814 17:37:09.323892   79871 fix.go:216] guest clock: 1723657029.302975780
	I0814 17:37:09.323900   79871 fix.go:229] Guest: 2024-08-14 17:37:09.30297578 +0000 UTC Remote: 2024-08-14 17:37:09.197335302 +0000 UTC m=+253.546385360 (delta=105.640478ms)
	I0814 17:37:09.323918   79871 fix.go:200] guest clock delta is within tolerance: 105.640478ms
	I0814 17:37:09.323923   79871 start.go:83] releasing machines lock for "default-k8s-diff-port-885666", held for 21.347903434s
	I0814 17:37:09.323948   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.324209   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:09.327260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327802   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.327833   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327993   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328727   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328814   79871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:09.328862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.328955   79871 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:09.328972   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.331813   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332081   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332233   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332274   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332365   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332490   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332512   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332555   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332761   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.332824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332882   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.332926   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.333021   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.416041   79871 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:09.456024   79871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:09.604623   79871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:09.610562   79871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:09.610624   79871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:09.627298   79871 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:09.627344   79871 start.go:495] detecting cgroup driver to use...
	I0814 17:37:09.627418   79871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:09.648212   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:09.666047   79871 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:09.666107   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:09.681875   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:09.695920   79871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:09.824502   79871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:09.979561   79871 docker.go:233] disabling docker service ...
	I0814 17:37:09.979658   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:09.996877   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:10.014264   79871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:10.166653   79871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:10.288261   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:10.301868   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:10.320716   79871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:10.320788   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.331099   79871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:10.331158   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.342841   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.353762   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.364604   79871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:10.376521   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.386787   79871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.406713   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.418047   79871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:10.428368   79871 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:10.428433   79871 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:10.442759   79871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:10.452993   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:10.563097   79871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:10.716953   79871 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:10.717031   79871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:10.722685   79871 start.go:563] Will wait 60s for crictl version
	I0814 17:37:10.722759   79871 ssh_runner.go:195] Run: which crictl
	I0814 17:37:10.726621   79871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:10.764534   79871 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:10.764628   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.791513   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.822380   79871 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:09.351136   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .Start
	I0814 17:37:09.351338   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring networks are active...
	I0814 17:37:09.352075   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network default is active
	I0814 17:37:09.352333   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network mk-old-k8s-version-505584 is active
	I0814 17:37:09.352701   80228 main.go:141] libmachine: (old-k8s-version-505584) Getting domain xml...
	I0814 17:37:09.353363   80228 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
	I0814 17:37:10.664390   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting to get IP...
	I0814 17:37:10.665484   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.665915   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.665980   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.665888   81116 retry.go:31] will retry after 285.047327ms: waiting for machine to come up
	I0814 17:37:10.952552   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.953009   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.953036   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.952973   81116 retry.go:31] will retry after 281.728141ms: waiting for machine to come up
	I0814 17:37:11.236576   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.237153   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.237192   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.237079   81116 retry.go:31] will retry after 341.673836ms: waiting for machine to come up
	I0814 17:37:10.401790   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:11.400713   79521 node_ready.go:49] node "embed-certs-309673" has status "Ready":"True"
	I0814 17:37:11.400742   79521 node_ready.go:38] duration metric: took 7.503686271s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:11.400755   79521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:11.408217   79521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414215   79521 pod_ready.go:92] pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:11.414244   79521 pod_ready.go:81] duration metric: took 5.997997ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414256   79521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:13.420804   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:10.824020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:10.827965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828426   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:10.828465   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828807   79871 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:10.833261   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:10.846928   79871 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:10.847080   79871 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:10.847142   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:10.889355   79871 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:10.889453   79871 ssh_runner.go:195] Run: which lz4
	I0814 17:37:10.894405   79871 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:37:10.898992   79871 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:10.899029   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:37:12.155402   79871 crio.go:462] duration metric: took 1.261016682s to copy over tarball
	I0814 17:37:12.155485   79871 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:14.344118   79871 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18859644s)
	I0814 17:37:14.344162   79871 crio.go:469] duration metric: took 2.188726026s to extract the tarball
	I0814 17:37:14.344173   79871 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:14.380317   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:14.428289   79871 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:37:14.428312   79871 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:37:14.428326   79871 kubeadm.go:934] updating node { 192.168.50.184 8444 v1.31.0 crio true true} ...
	I0814 17:37:14.428422   79871 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-885666 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:14.428491   79871 ssh_runner.go:195] Run: crio config
	I0814 17:37:14.475385   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:14.475416   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:14.475433   79871 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:14.475464   79871 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-885666 NodeName:default-k8s-diff-port-885666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:37:14.475645   79871 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-885666"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:14.475712   79871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:37:14.485148   79871 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:14.485206   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:14.494161   79871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 17:37:14.511050   79871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:14.526395   79871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0814 17:37:14.543061   79871 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:14.546747   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:14.558022   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:14.671818   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:14.688541   79871 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666 for IP: 192.168.50.184
	I0814 17:37:14.688583   79871 certs.go:194] generating shared ca certs ...
	I0814 17:37:14.688609   79871 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:14.688823   79871 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:14.688889   79871 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:14.688903   79871 certs.go:256] generating profile certs ...
	I0814 17:37:14.689020   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/client.key
	I0814 17:37:14.689132   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key.690c84bc
	I0814 17:37:14.689182   79871 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key
	I0814 17:37:14.689310   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:14.689367   79871 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:14.689385   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:14.689422   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:14.689453   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:14.689479   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:14.689524   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:14.690168   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:14.717906   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:14.759373   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:14.809775   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:14.834875   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 17:37:14.857860   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:14.886813   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:14.909803   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:14.935075   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:14.959759   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:14.985877   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:15.008456   79871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:15.025602   79871 ssh_runner.go:195] Run: openssl version
	I0814 17:37:15.031392   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:15.041931   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046475   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046531   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.052377   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:15.063000   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:15.073463   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078411   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078471   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.083835   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:15.093753   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:15.103876   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108487   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108559   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.114104   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:15.124285   79871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:15.128515   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:15.134223   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:15.139700   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:15.145537   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:15.151287   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:15.156766   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:37:15.162149   79871 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:15.162256   79871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:15.162314   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.198745   79871 cri.go:89] found id: ""
	I0814 17:37:15.198814   79871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:15.212198   79871 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:15.212216   79871 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:15.212256   79871 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:15.224275   79871 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:15.225218   79871 kubeconfig.go:125] found "default-k8s-diff-port-885666" server: "https://192.168.50.184:8444"
	I0814 17:37:15.227291   79871 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:15.237448   79871 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.184
	I0814 17:37:15.237494   79871 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:15.237509   79871 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:15.237563   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.281593   79871 cri.go:89] found id: ""
	I0814 17:37:15.281662   79871 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:15.298596   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:15.308702   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:15.308723   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:15.308779   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:37:15.318348   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:15.318409   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:15.330049   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:37:15.341283   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:15.341373   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:15.350584   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.361658   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:15.361718   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.373526   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:37:15.382360   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:15.382432   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:15.392477   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:15.402387   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:15.528954   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:11.580887   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.581466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.581500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.581392   81116 retry.go:31] will retry after 514.448726ms: waiting for machine to come up
	I0814 17:37:12.098137   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.098652   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.098740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.098642   81116 retry.go:31] will retry after 649.302617ms: waiting for machine to come up
	I0814 17:37:12.749349   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.749777   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.749803   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.749736   81116 retry.go:31] will retry after 897.486278ms: waiting for machine to come up
	I0814 17:37:13.649145   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:13.649666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:13.649698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:13.649621   81116 retry.go:31] will retry after 1.017213079s: waiting for machine to come up
	I0814 17:37:14.669187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:14.669715   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:14.669740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:14.669679   81116 retry.go:31] will retry after 1.014709613s: waiting for machine to come up
	I0814 17:37:15.685748   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:15.686269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:15.686299   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:15.686217   81116 retry.go:31] will retry after 1.476940798s: waiting for machine to come up
	I0814 17:37:15.422067   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.421689   79521 pod_ready.go:92] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.421715   79521 pod_ready.go:81] duration metric: took 5.007451471s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.421724   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426620   79521 pod_ready.go:92] pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.426644   79521 pod_ready.go:81] duration metric: took 4.912475ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426657   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430754   79521 pod_ready.go:92] pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.430776   79521 pod_ready.go:81] duration metric: took 4.110475ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430787   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434469   79521 pod_ready.go:92] pod "kube-proxy-z8x9t" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.434487   79521 pod_ready.go:81] duration metric: took 3.693253ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434498   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438294   79521 pod_ready.go:92] pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.438314   79521 pod_ready.go:81] duration metric: took 3.80298ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438346   79521 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:18.445838   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.453075   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.676680   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.741803   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.831091   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:16.831186   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.332193   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.831346   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.331620   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.832011   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.331528   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.348083   79871 api_server.go:72] duration metric: took 2.516990388s to wait for apiserver process to appear ...
	I0814 17:37:19.348119   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:37:19.348144   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:17.164541   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:17.165093   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:17.165122   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:17.165017   81116 retry.go:31] will retry after 1.644726601s: waiting for machine to come up
	I0814 17:37:18.811628   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:18.812199   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:18.812224   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:18.812132   81116 retry.go:31] will retry after 2.740531885s: waiting for machine to come up
	I0814 17:37:21.576628   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.576657   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.576672   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.601355   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.601389   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.848481   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.855499   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:21.855530   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.349158   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.353345   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:22.353368   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.848954   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.853912   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:37:22.865096   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:22.865127   79871 api_server.go:131] duration metric: took 3.516999004s to wait for apiserver health ...
	I0814 17:37:22.865138   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:22.865146   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:22.866812   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:20.446123   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.446518   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:24.945729   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.867939   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:22.881586   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:22.899815   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:22.910873   79871 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:22.910928   79871 system_pods.go:61] "coredns-6f6b679f8f-mxc9v" [d1b9d422-faff-4709-a375-f8783e75e18c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:22.910946   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [a5473465-a1c1-4413-8e77-74fb1eb398a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:22.910956   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [06c53e48-b156-42b1-b381-818f75821196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:22.910966   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [18a2d7fb-4e18-4880-8812-63d25934699b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:22.910977   79871 system_pods.go:61] "kube-proxy-4rrff" [14453cc8-da7d-4dd4-b7fa-89a26dbbf23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:22.910995   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [f0455f16-9a3e-4ede-8524-f701b1ab4ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:22.911005   79871 system_pods.go:61] "metrics-server-6867b74b74-qtzm8" [04c797ec-2e38-42a7-a023-5f60c451f780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:22.911020   79871 system_pods.go:61] "storage-provisioner" [88c2e8f0-0706-494a-8e83-0ede8f129040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:22.911032   79871 system_pods.go:74] duration metric: took 11.192968ms to wait for pod list to return data ...
	I0814 17:37:22.911044   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:22.915096   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:22.915128   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:22.915140   79871 node_conditions.go:105] duration metric: took 4.087198ms to run NodePressure ...
	I0814 17:37:22.915165   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:23.204612   79871 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209643   79871 kubeadm.go:739] kubelet initialised
	I0814 17:37:23.209665   79871 kubeadm.go:740] duration metric: took 5.023123ms waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209673   79871 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:23.215957   79871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.221969   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.221993   79871 pod_ready.go:81] duration metric: took 6.011053ms for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.222008   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.222014   79871 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.227119   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227147   79871 pod_ready.go:81] duration metric: took 5.125006ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.227157   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227163   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.231297   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231321   79871 pod_ready.go:81] duration metric: took 4.149023ms for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.231346   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231355   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:25.239956   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:21.555057   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:21.555530   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:21.555562   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:21.555484   81116 retry.go:31] will retry after 3.159225533s: waiting for machine to come up
	I0814 17:37:24.716173   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:24.716482   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:24.716507   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:24.716451   81116 retry.go:31] will retry after 3.32732131s: waiting for machine to come up
	I0814 17:37:29.512066   79367 start.go:364] duration metric: took 55.26941078s to acquireMachinesLock for "no-preload-545149"
	I0814 17:37:29.512115   79367 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:29.512123   79367 fix.go:54] fixHost starting: 
	I0814 17:37:29.512539   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:29.512574   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:29.529625   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0814 17:37:29.530074   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:29.530558   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:37:29.530585   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:29.530930   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:29.531149   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:29.531291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:37:29.532999   79367 fix.go:112] recreateIfNeeded on no-preload-545149: state=Stopped err=<nil>
	I0814 17:37:29.533049   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	W0814 17:37:29.533224   79367 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:29.535000   79367 out.go:177] * Restarting existing kvm2 VM for "no-preload-545149" ...
	I0814 17:37:27.445398   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.945246   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:27.737698   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.737890   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:28.045690   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046151   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has current primary IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046177   80228 main.go:141] libmachine: (old-k8s-version-505584) Found IP for machine: 192.168.72.49
	I0814 17:37:28.046192   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserving static IP address...
	I0814 17:37:28.046500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.046524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | skip adding static IP to network mk-old-k8s-version-505584 - found existing host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"}
	I0814 17:37:28.046540   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserved static IP address: 192.168.72.49
	I0814 17:37:28.046559   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting for SSH to be available...
	I0814 17:37:28.046571   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Getting to WaitForSSH function...
	I0814 17:37:28.048709   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049082   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.049106   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049252   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH client type: external
	I0814 17:37:28.049285   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa (-rw-------)
	I0814 17:37:28.049325   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:28.049342   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | About to run SSH command:
	I0814 17:37:28.049356   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | exit 0
	I0814 17:37:28.179844   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:28.180193   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:37:28.180865   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.183617   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184074   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.184118   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184367   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:37:28.184641   80228 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:28.184663   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:28.184891   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.187158   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187517   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.187547   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187696   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.187857   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188027   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188178   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.188320   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.188570   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.188587   80228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:28.303564   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:28.303597   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.303831   80228 buildroot.go:166] provisioning hostname "old-k8s-version-505584"
	I0814 17:37:28.303856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.304021   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.306826   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307180   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.307210   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307415   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.307608   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307769   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307915   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.308131   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.308336   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.308354   80228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-505584 && echo "old-k8s-version-505584" | sudo tee /etc/hostname
	I0814 17:37:28.434224   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-505584
	
	I0814 17:37:28.434261   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.437350   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437633   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.437666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.438077   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438245   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438395   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.438623   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.438832   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.438857   80228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-505584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-505584/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-505584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:28.564784   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:28.564815   80228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:28.564858   80228 buildroot.go:174] setting up certificates
	I0814 17:37:28.564872   80228 provision.go:84] configureAuth start
	I0814 17:37:28.564884   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.565188   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.568217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.568698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.568731   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.569013   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.571364   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571780   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.571805   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571961   80228 provision.go:143] copyHostCerts
	I0814 17:37:28.572023   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:28.572032   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:28.572076   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:28.572176   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:28.572184   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:28.572206   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:28.572275   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:28.572284   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:28.572337   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:28.572435   80228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-505584 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
	I0814 17:37:28.804798   80228 provision.go:177] copyRemoteCerts
	I0814 17:37:28.804853   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:28.804879   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.807967   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.808302   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808458   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.808690   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.808874   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.809001   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:28.900346   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:28.926959   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 17:37:28.955373   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:28.984436   80228 provision.go:87] duration metric: took 419.552519ms to configureAuth
	I0814 17:37:28.984463   80228 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:28.984630   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:37:28.984713   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.987602   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988077   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.988107   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988237   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.988486   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988641   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988768   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.988986   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.989209   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.989234   80228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:29.262630   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:29.262656   80228 machine.go:97] duration metric: took 1.078000469s to provisionDockerMachine
	I0814 17:37:29.262669   80228 start.go:293] postStartSetup for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:37:29.262683   80228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:29.262704   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.263051   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:29.263082   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.266020   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.266495   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266720   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.266919   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.267093   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.267253   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.354027   80228 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:29.358196   80228 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:29.358224   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:29.358304   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:29.358416   80228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:29.358543   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:29.367802   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:29.392802   80228 start.go:296] duration metric: took 130.117007ms for postStartSetup
	I0814 17:37:29.392846   80228 fix.go:56] duration metric: took 20.068754346s for fixHost
	I0814 17:37:29.392871   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.395638   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396032   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.396064   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396251   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.396516   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396698   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396893   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.397155   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:29.397326   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:29.397340   80228 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:29.511889   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657049.468340520
	
	I0814 17:37:29.511913   80228 fix.go:216] guest clock: 1723657049.468340520
	I0814 17:37:29.511923   80228 fix.go:229] Guest: 2024-08-14 17:37:29.46834052 +0000 UTC Remote: 2024-08-14 17:37:29.392851248 +0000 UTC m=+223.104093144 (delta=75.489272ms)
	I0814 17:37:29.511983   80228 fix.go:200] guest clock delta is within tolerance: 75.489272ms
	I0814 17:37:29.511996   80228 start.go:83] releasing machines lock for "old-k8s-version-505584", held for 20.187937886s
	I0814 17:37:29.512031   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.512333   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:29.515152   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515487   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.515524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515735   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516299   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516497   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516643   80228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:29.516723   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.516727   80228 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:29.516752   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.519600   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.519751   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520017   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520045   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520164   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520192   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520341   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520423   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520520   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520588   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520646   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520718   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.520780   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.642824   80228 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:29.648834   80228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:29.795482   80228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:29.801407   80228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:29.801486   80228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:29.821662   80228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:29.821684   80228 start.go:495] detecting cgroup driver to use...
	I0814 17:37:29.821761   80228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:29.843829   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:29.859505   80228 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:29.859590   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:29.873790   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:29.889295   80228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:30.035909   80228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:30.209521   80228 docker.go:233] disabling docker service ...
	I0814 17:37:30.209574   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:30.226980   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:30.241678   80228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:30.375116   80228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:30.498357   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:30.512272   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:30.533062   80228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:37:30.533130   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.543595   80228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:30.543664   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.554139   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.564417   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.574627   80228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:30.584957   80228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:30.594667   80228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:30.594720   80228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:30.606826   80228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:30.621990   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:30.758992   80228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:30.915494   80228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:30.915572   80228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:30.920692   80228 start.go:563] Will wait 60s for crictl version
	I0814 17:37:30.920767   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:30.924365   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:30.964662   80228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:30.964756   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:30.995534   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:31.025400   80228 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:37:31.026943   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:31.030217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030630   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:31.030665   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030943   80228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:31.034960   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:31.047742   80228 kubeadm.go:883] updating cluster {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:31.047864   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:37:31.047926   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:31.092203   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:31.092278   80228 ssh_runner.go:195] Run: which lz4
	I0814 17:37:31.096471   80228 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:37:31.100610   80228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:31.100642   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:37:29.536310   79367 main.go:141] libmachine: (no-preload-545149) Calling .Start
	I0814 17:37:29.536513   79367 main.go:141] libmachine: (no-preload-545149) Ensuring networks are active...
	I0814 17:37:29.537431   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network default is active
	I0814 17:37:29.537935   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network mk-no-preload-545149 is active
	I0814 17:37:29.538468   79367 main.go:141] libmachine: (no-preload-545149) Getting domain xml...
	I0814 17:37:29.539383   79367 main.go:141] libmachine: (no-preload-545149) Creating domain...
	I0814 17:37:30.863155   79367 main.go:141] libmachine: (no-preload-545149) Waiting to get IP...
	I0814 17:37:30.864257   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:30.864722   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:30.864812   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:30.864695   81248 retry.go:31] will retry after 244.342973ms: waiting for machine to come up
	I0814 17:37:31.111211   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.111784   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.111806   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.111735   81248 retry.go:31] will retry after 277.033145ms: waiting for machine to come up
	I0814 17:37:31.390071   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.390511   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.390554   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.390429   81248 retry.go:31] will retry after 320.225451ms: waiting for machine to come up
	I0814 17:37:31.949069   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:34.445833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:31.741110   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:33.239418   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.239449   79871 pod_ready.go:81] duration metric: took 10.008084028s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.239462   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244600   79871 pod_ready.go:92] pod "kube-proxy-4rrff" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.244628   79871 pod_ready.go:81] duration metric: took 5.157296ms for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244648   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253613   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:35.253643   79871 pod_ready.go:81] duration metric: took 2.008985731s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253657   79871 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:32.582064   80228 crio.go:462] duration metric: took 1.485645107s to copy over tarball
	I0814 17:37:32.582151   80228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:35.556765   80228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974581109s)
	I0814 17:37:35.556795   80228 crio.go:469] duration metric: took 2.9747s to extract the tarball
	I0814 17:37:35.556802   80228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:35.599129   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:35.632752   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:35.632775   80228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:35.632831   80228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.632864   80228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.632892   80228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:37:35.632911   80228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.632944   80228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.633112   80228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634793   80228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.634821   80228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:37:35.634824   80228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634885   80228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.634910   80228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.635009   80228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.635082   80228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.635265   80228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.905566   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:37:35.953168   80228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:37:35.953210   80228 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:37:35.953260   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:35.957961   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:35.978859   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.978920   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.988556   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.993281   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.997933   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.018501   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.043527   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.146739   80228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:37:36.146812   80228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:37:36.146832   80228 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.146852   80228 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.146881   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.146891   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163832   80228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:37:36.163856   80228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:37:36.163877   80228 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.163889   80228 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.163923   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163924   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163927   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.172482   80228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:37:36.172530   80228 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.172599   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.195157   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.195208   80228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:37:36.195165   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.195242   80228 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.195245   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.195277   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.237454   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:37:36.237519   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.237549   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.292614   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.306771   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.306794   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:31.712067   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.712601   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.712630   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.712575   81248 retry.go:31] will retry after 546.687472ms: waiting for machine to come up
	I0814 17:37:32.261457   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.261921   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.261950   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.261854   81248 retry.go:31] will retry after 484.345236ms: waiting for machine to come up
	I0814 17:37:32.747475   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.748118   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.748149   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.748060   81248 retry.go:31] will retry after 899.564198ms: waiting for machine to come up
	I0814 17:37:33.649684   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:33.650206   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:33.650234   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:33.650155   81248 retry.go:31] will retry after 1.039934932s: waiting for machine to come up
	I0814 17:37:34.691741   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:34.692197   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:34.692220   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:34.692169   81248 retry.go:31] will retry after 925.402437ms: waiting for machine to come up
	I0814 17:37:35.618737   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:35.619169   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:35.619200   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:35.619102   81248 retry.go:31] will retry after 1.401066913s: waiting for machine to come up
	I0814 17:37:36.447039   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:38.945321   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:37.260912   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:39.759967   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:36.321893   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.339836   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.339929   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.426588   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.426659   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.433149   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.469717   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:36.477512   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.477583   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.477761   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.538635   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:37:36.557712   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:37:36.558304   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.700263   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:37:36.700333   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:37:36.700410   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:37:36.700481   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:37:36.700527   80228 cache_images.go:92] duration metric: took 1.067740607s to LoadCachedImages
	W0814 17:37:36.700602   80228 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 17:37:36.700623   80228 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0814 17:37:36.700757   80228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-505584 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:36.700846   80228 ssh_runner.go:195] Run: crio config
	I0814 17:37:36.748814   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:37:36.748843   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:36.748860   80228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:36.748885   80228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-505584 NodeName:old-k8s-version-505584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:37:36.749053   80228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-505584"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:36.749129   80228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:37:36.760058   80228 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:36.760131   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:36.769388   80228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0814 17:37:36.786594   80228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:36.807695   80228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0814 17:37:36.825609   80228 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:36.829296   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:36.841882   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:36.976199   80228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:36.993682   80228 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584 for IP: 192.168.72.49
	I0814 17:37:36.993707   80228 certs.go:194] generating shared ca certs ...
	I0814 17:37:36.993728   80228 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:36.993924   80228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:36.993985   80228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:36.993998   80228 certs.go:256] generating profile certs ...
	I0814 17:37:36.994115   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key
	I0814 17:37:36.994206   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f
	I0814 17:37:36.994261   80228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key
	I0814 17:37:36.994428   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:36.994478   80228 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:36.994492   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:36.994522   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:36.994557   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:36.994603   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:36.994661   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:36.995558   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:37.043910   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:37.073810   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:37.097939   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:37.124449   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 17:37:37.154747   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:37.179474   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:37.204471   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:37.228579   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:37.266929   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:37.292912   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:37.316803   80228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:37.332934   80228 ssh_runner.go:195] Run: openssl version
	I0814 17:37:37.339316   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:37.349829   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354230   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354297   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.360089   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:37.371417   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:37.381777   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385894   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385955   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.391826   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:37.402049   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:37.412038   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416395   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416448   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.421794   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:37.431868   80228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:37.436305   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:37.442838   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:37.448991   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:37.454769   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:37.460381   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:37.466406   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:37:37.472466   80228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:37.472584   80228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:37.472636   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.508256   80228 cri.go:89] found id: ""
	I0814 17:37:37.508323   80228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:37.518824   80228 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:37.518856   80228 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:37.518941   80228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:37.529328   80228 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:37.530242   80228 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:37.530890   80228 kubeconfig.go:62] /home/jenkins/minikube-integration/19446-13977/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-505584" cluster setting kubeconfig missing "old-k8s-version-505584" context setting]
	I0814 17:37:37.531922   80228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:37.539843   80228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:37.550012   80228 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0814 17:37:37.550051   80228 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:37.550063   80228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:37.550113   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.590226   80228 cri.go:89] found id: ""
	I0814 17:37:37.590307   80228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:37.606242   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:37.615340   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:37.615377   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:37.615436   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:37:37.623996   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:37.624063   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:37.633356   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:37:37.642888   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:37.642958   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:37.652532   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.661607   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:37.661679   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.670876   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:37:37.679780   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:37.679846   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:37.690044   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:37.699617   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:37.813799   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.666487   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.901307   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.029983   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.139056   80228 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:39.139133   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:39.639191   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.139315   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.639292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:41.139421   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:37.021766   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:37.022253   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:37.022282   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:37.022216   81248 retry.go:31] will retry after 2.184222941s: waiting for machine to come up
	I0814 17:37:39.209777   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:39.210239   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:39.210265   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:39.210203   81248 retry.go:31] will retry after 2.903962154s: waiting for machine to come up
	I0814 17:37:41.445413   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:43.949816   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.760985   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:44.260273   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.639312   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.139387   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.639981   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.139499   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.639391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.139425   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.639677   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.139466   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.639426   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:46.140065   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.116682   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:42.117116   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:42.117154   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:42.117086   81248 retry.go:31] will retry after 3.387467992s: waiting for machine to come up
	I0814 17:37:45.505680   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:45.506034   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:45.506056   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:45.505986   81248 retry.go:31] will retry after 2.944973353s: waiting for machine to come up
	I0814 17:37:46.443772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:48.445046   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.759297   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:49.260881   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.640043   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.139213   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.639848   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.140080   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.639961   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.139473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.639212   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.139781   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.640028   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.140140   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.452516   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453064   79367 main.go:141] libmachine: (no-preload-545149) Found IP for machine: 192.168.39.162
	I0814 17:37:48.453092   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has current primary IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453099   79367 main.go:141] libmachine: (no-preload-545149) Reserving static IP address...
	I0814 17:37:48.453513   79367 main.go:141] libmachine: (no-preload-545149) Reserved static IP address: 192.168.39.162
	I0814 17:37:48.453564   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.453578   79367 main.go:141] libmachine: (no-preload-545149) Waiting for SSH to be available...
	I0814 17:37:48.453608   79367 main.go:141] libmachine: (no-preload-545149) DBG | skip adding static IP to network mk-no-preload-545149 - found existing host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"}
	I0814 17:37:48.453630   79367 main.go:141] libmachine: (no-preload-545149) DBG | Getting to WaitForSSH function...
	I0814 17:37:48.455959   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456279   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.456304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456429   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH client type: external
	I0814 17:37:48.456449   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa (-rw-------)
	I0814 17:37:48.456490   79367 main.go:141] libmachine: (no-preload-545149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:48.456506   79367 main.go:141] libmachine: (no-preload-545149) DBG | About to run SSH command:
	I0814 17:37:48.456520   79367 main.go:141] libmachine: (no-preload-545149) DBG | exit 0
	I0814 17:37:48.579489   79367 main.go:141] libmachine: (no-preload-545149) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:48.579924   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetConfigRaw
	I0814 17:37:48.580615   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.583202   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583545   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.583592   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583857   79367 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/config.json ...
	I0814 17:37:48.584093   79367 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:48.584113   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:48.584340   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.586712   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587086   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.587107   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587259   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.587441   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587720   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.587869   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.588029   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.588040   79367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:48.691255   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:48.691290   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691555   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:37:48.691593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691798   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.694492   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694768   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.694797   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694907   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.695084   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695248   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695396   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.695556   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.695777   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.695798   79367 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-545149 && echo "no-preload-545149" | sudo tee /etc/hostname
	I0814 17:37:48.813509   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-545149
	
	I0814 17:37:48.813537   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.816304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816698   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.816732   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816884   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.817057   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817265   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817393   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.817586   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.817809   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.817836   79367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:48.927482   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:48.927512   79367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:48.927540   79367 buildroot.go:174] setting up certificates
	I0814 17:37:48.927551   79367 provision.go:84] configureAuth start
	I0814 17:37:48.927567   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.927831   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.930532   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.930879   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.930906   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.931104   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.933420   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933754   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.933783   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933893   79367 provision.go:143] copyHostCerts
	I0814 17:37:48.933968   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:48.933979   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:48.934040   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:48.934146   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:48.934156   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:48.934186   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:48.934262   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:48.934271   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:48.934302   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:48.934377   79367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.no-preload-545149 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-545149]
	I0814 17:37:49.287517   79367 provision.go:177] copyRemoteCerts
	I0814 17:37:49.287580   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:49.287607   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.290280   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.290690   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290856   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.291063   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.291180   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.291304   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.374716   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:49.398652   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:37:49.422885   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:37:49.448774   79367 provision.go:87] duration metric: took 521.207251ms to configureAuth
	I0814 17:37:49.448800   79367 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:49.448972   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:49.449064   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.452034   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452373   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.452403   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452604   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.452859   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453058   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453217   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.453388   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.453579   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.453601   79367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:49.711896   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:49.711922   79367 machine.go:97] duration metric: took 1.127817152s to provisionDockerMachine
	I0814 17:37:49.711933   79367 start.go:293] postStartSetup for "no-preload-545149" (driver="kvm2")
	I0814 17:37:49.711942   79367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:49.711977   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.712299   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:49.712324   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.714736   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715059   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.715097   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715232   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.715428   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.715616   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.715769   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.797746   79367 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:49.801764   79367 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:49.801794   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:49.801863   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:49.801960   79367 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:49.802081   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:49.811506   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:49.834762   79367 start.go:296] duration metric: took 122.81358ms for postStartSetup
	I0814 17:37:49.834812   79367 fix.go:56] duration metric: took 20.32268926s for fixHost
	I0814 17:37:49.834837   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.837418   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837739   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.837768   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837903   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.838114   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838292   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838438   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.838643   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.838838   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.838850   79367 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:37:49.944936   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657069.919883473
	
	I0814 17:37:49.944965   79367 fix.go:216] guest clock: 1723657069.919883473
	I0814 17:37:49.944975   79367 fix.go:229] Guest: 2024-08-14 17:37:49.919883473 +0000 UTC Remote: 2024-08-14 17:37:49.834818813 +0000 UTC m=+358.184638535 (delta=85.06466ms)
	I0814 17:37:49.945005   79367 fix.go:200] guest clock delta is within tolerance: 85.06466ms
	I0814 17:37:49.945017   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 20.432923283s
	I0814 17:37:49.945044   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.945291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:49.947847   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948269   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.948295   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948500   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949082   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949262   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949347   79367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:49.949406   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.949517   79367 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:49.949541   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.952281   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952328   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952692   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952833   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.952836   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952895   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.953037   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.953075   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953201   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953212   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953400   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953412   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.953543   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:50.072094   79367 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:50.080210   79367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:50.227736   79367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:50.233533   79367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:50.233597   79367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:50.249452   79367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:50.249474   79367 start.go:495] detecting cgroup driver to use...
	I0814 17:37:50.249552   79367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:50.265740   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:50.278769   79367 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:50.278833   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:50.291625   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:50.304529   79367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:50.415405   79367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:50.556016   79367 docker.go:233] disabling docker service ...
	I0814 17:37:50.556092   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:50.570197   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:50.583068   79367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:50.721414   79367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:50.850890   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:50.864530   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:50.882021   79367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:50.882097   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.891490   79367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:50.891564   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.901437   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.911316   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.920935   79367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:50.930571   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.940106   79367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.957351   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.967222   79367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:50.976120   79367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:50.976170   79367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:50.990922   79367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:51.000086   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:51.116655   79367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:51.246182   79367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:51.246265   79367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:51.250838   79367 start.go:563] Will wait 60s for crictl version
	I0814 17:37:51.250900   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.254633   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:51.299890   79367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
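	The "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" steps above are simple poll-until-ready loops. A hedged Go sketch of such a wait; the poll interval and error text are assumptions, not minikube's exact code:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the timeout elapses.
    func waitForPath(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("socket is ready")
    }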
	I0814 17:37:51.299992   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.328292   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.360415   79367 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:51.361536   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:51.364443   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.364884   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:51.364914   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.365112   79367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:51.368941   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:51.380519   79367 kubeadm.go:883] updating cluster {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:51.380668   79367 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:51.380705   79367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:51.413314   79367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:51.413346   79367 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:51.413417   79367 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.413435   79367 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.413452   79367 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.413395   79367 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.413473   79367 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 17:37:51.413440   79367 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.413521   79367 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.413529   79367 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.414920   79367 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.414940   79367 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 17:37:51.414983   79367 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.415006   79367 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.415010   79367 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.414982   79367 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.415070   79367 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.415100   79367 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.664642   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.686463   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:50.445457   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:52.945915   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.762809   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:54.259593   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.639969   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.139918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.639403   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.139921   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.640224   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.140272   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.639242   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.139908   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.639233   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:56.139955   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.699627   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 17:37:51.718031   79367 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 17:37:51.718085   79367 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.718133   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.736370   79367 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 17:37:51.736408   79367 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.736454   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.779229   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.800986   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.819343   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.841240   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.853614   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.853650   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.853753   79367 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 17:37:51.853798   79367 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.853842   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.866717   79367 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 17:37:51.866757   79367 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.866807   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.908593   79367 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 17:37:51.908644   79367 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.908701   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.936701   79367 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 17:37:51.936737   79367 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.936784   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.944882   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.944962   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.944983   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.945008   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.945070   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.945089   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.063281   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.080543   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.080556   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.080574   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:52.080629   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:52.080647   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.126573   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.205600   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.205623   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.236617   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 17:37:52.236678   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.236757   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.237083   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 17:37:52.237161   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:52.238804   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 17:37:52.238891   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:52.294945   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 17:37:52.295018   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 17:37:52.295064   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:37:52.295103   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 17:37:52.295127   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295189   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295110   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:52.302365   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 17:37:52.302388   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 17:37:52.302423   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 17:37:52.302472   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:37:52.306933   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 17:37:52.307107   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 17:37:52.309298   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.976780716s)
	I0814 17:37:54.272032   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 17:37:54.272053   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.272063   79367 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.962736886s)
	I0814 17:37:54.272100   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.969503874s)
	I0814 17:37:54.272150   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 17:37:54.272105   79367 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 17:37:54.272192   79367 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.272250   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:56.021236   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.749108117s)
	I0814 17:37:56.021281   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 17:37:56.021288   79367 ssh_runner.go:235] Completed: which crictl: (1.749013682s)
	I0814 17:37:56.021309   79367 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021370   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021386   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:55.445017   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:57.445204   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:59.945329   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.260666   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:58.760907   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.639799   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.140184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.639918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.139310   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.639393   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.140139   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.639614   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.139472   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.139946   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.830150   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.808753337s)
	I0814 17:37:59.830181   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 17:37:59.830205   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:59.830208   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.80880721s)
	I0814 17:37:59.830253   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:59.830255   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:38:02.444320   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:04.444667   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.260951   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:03.759895   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.639422   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.139858   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.639412   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.140047   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.640170   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.139779   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.639728   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.139343   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.640249   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.139448   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.796675   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.966400982s)
	I0814 17:38:01.796690   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.966414051s)
	I0814 17:38:01.796708   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 17:38:01.796735   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.796757   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:38:01.796796   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.841898   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 17:38:01.841994   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.571965   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.775142217s)
	I0814 17:38:03.571991   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.729967853s)
	I0814 17:38:03.572002   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 17:38:03.572019   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 17:38:03.572028   79367 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.572079   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:04.422670   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 17:38:04.422705   79367 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:04.422760   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:06.277419   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.854632861s)
	I0814 17:38:06.277457   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 17:38:06.277488   79367 cache_images.go:123] Successfully loaded all cached images
	I0814 17:38:06.277494   79367 cache_images.go:92] duration metric: took 14.864134758s to LoadCachedImages
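	Each image in the cache_images sequence above follows the same pattern: inspect it in the VM's runtime, remove any mismatched copy with crictl, skip the transfer if the tarball is already on disk, then `podman load` it. A compressed Go sketch of that per-image loop, run locally via os/exec with simplified error handling (an assumption; the real code drives these commands over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadCachedImage mirrors the inspect -> rmi -> load sequence from the log.
    // The image name and tarball path in main are taken from the log; the local
    // exec calls stand in for minikube's SSH-based runner.
    func loadCachedImage(image, tarball string) error {
        // Is the image already present in the container runtime?
        if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
            return nil // already loaded, nothing to do
        }
        // Remove any stale reference so the load is clean.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        // Load the cached tarball into the runtime.
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        err := loadCachedImage("registry.k8s.io/kube-scheduler:v1.31.0",
            "/var/lib/minikube/images/kube-scheduler_v1.31.0")
        fmt.Println("load result:", err)
    }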
	I0814 17:38:06.277504   79367 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.0 crio true true} ...
	I0814 17:38:06.277628   79367 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-545149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:38:06.277690   79367 ssh_runner.go:195] Run: crio config
	I0814 17:38:06.337971   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:06.337990   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:06.337999   79367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:38:06.338019   79367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545149 NodeName:no-preload-545149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:38:06.338148   79367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
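	kubeadm.go:187 renders the config above from the option struct logged at kubeadm.go:181. A much-reduced text/template sketch of that rendering; the parameter set and template body here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeadmParams models only a handful of the real fields, purely for illustration.
    type kubeadmParams struct {
        AdvertiseAddress  string
        BindPort          int
        NodeName          string
        KubernetesVersion string
        PodSubnet         string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        _ = t.Execute(os.Stdout, kubeadmParams{
            AdvertiseAddress:  "192.168.39.162",
            BindPort:          8443,
            NodeName:          "no-preload-545149",
            KubernetesVersion: "v1.31.0",
            PodSubnet:         "10.244.0.0/16",
        })
    }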
	
	I0814 17:38:06.338222   79367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:38:06.348156   79367 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:38:06.348219   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:38:06.356784   79367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0814 17:38:06.372439   79367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:38:06.388610   79367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0814 17:38:06.405084   79367 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I0814 17:38:06.408753   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:38:06.420313   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:38:06.546115   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:38:06.563747   79367 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149 for IP: 192.168.39.162
	I0814 17:38:06.563776   79367 certs.go:194] generating shared ca certs ...
	I0814 17:38:06.563799   79367 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:38:06.563973   79367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:38:06.564035   79367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:38:06.564058   79367 certs.go:256] generating profile certs ...
	I0814 17:38:06.564150   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/client.key
	I0814 17:38:06.564207   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key.d0704694
	I0814 17:38:06.564241   79367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key
	I0814 17:38:06.564349   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:38:06.564377   79367 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:38:06.564386   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:38:06.564411   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:38:06.564437   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:38:06.564459   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:38:06.564497   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:38:06.565269   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:38:06.592622   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:38:06.619148   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:38:06.646169   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:38:06.682399   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:38:06.446354   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.948005   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:05.760991   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.260189   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:10.260816   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:06.639416   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.140176   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.639682   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.140063   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.640014   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.139435   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.639256   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.139949   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.640283   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:11.139394   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.714195   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:38:06.750431   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:38:06.772702   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:38:06.793932   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:38:06.815601   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:38:06.837187   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:38:06.858175   79367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:38:06.876187   79367 ssh_runner.go:195] Run: openssl version
	I0814 17:38:06.881909   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:38:06.892057   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896191   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896251   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.901630   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:38:06.910888   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:38:06.920223   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924480   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924527   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.929591   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:38:06.939072   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:38:06.949970   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954288   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954339   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.959551   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:38:06.969505   79367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:38:06.973905   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:38:06.980211   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:38:06.986779   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:38:06.992220   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:38:06.997446   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:38:07.002681   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
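	Each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now. An equivalent check in Go that parses the PEM file directly (the path in main is just one of the certs listed above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d from now, mirroring `openssl x509 -checkend`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }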
	I0814 17:38:07.008037   79367 kubeadm.go:392] StartCluster: {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:38:07.008131   79367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:38:07.008188   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.043144   79367 cri.go:89] found id: ""
	I0814 17:38:07.043214   79367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:38:07.052215   79367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:38:07.052233   79367 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:38:07.052281   79367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:38:07.060618   79367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:38:07.061557   79367 kubeconfig.go:125] found "no-preload-545149" server: "https://192.168.39.162:8443"
	I0814 17:38:07.063554   79367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:38:07.072026   79367 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I0814 17:38:07.072064   79367 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:38:07.072075   79367 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:38:07.072117   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.109349   79367 cri.go:89] found id: ""
	I0814 17:38:07.109412   79367 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:38:07.126888   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:38:07.138272   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:38:07.138293   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:38:07.138367   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:38:07.147160   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:38:07.147220   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:38:07.156664   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:38:07.165122   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:38:07.165167   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:38:07.173478   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.181391   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:38:07.181449   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.189750   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:38:07.198215   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:38:07.198274   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:38:07.207384   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:38:07.216034   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:07.337710   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.227720   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.455979   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.521250   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.654574   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:38:08.654684   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.155639   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.655182   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.696193   79367 api_server.go:72] duration metric: took 1.041620068s to wait for apiserver process to appear ...
	I0814 17:38:09.696223   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:38:09.696241   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:09.696703   79367 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I0814 17:38:10.197180   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.389673   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.389702   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.389717   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.403106   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.403138   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.696486   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.700755   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:12.700784   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.196293   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.200564   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:13.200592   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.697253   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.705430   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:38:13.732816   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:38:13.732843   79367 api_server.go:131] duration metric: took 4.036614106s to wait for apiserver health ...
	I0814 17:38:13.732852   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:13.732860   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:13.734904   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:38:11.444846   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:13.943583   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:12.759294   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:14.760919   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:11.640107   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.140034   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.639463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.139428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.639575   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.140005   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.639473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.140124   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.639459   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:16.139187   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.736533   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:38:13.756650   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:38:13.776947   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:38:13.803170   79367 system_pods.go:59] 8 kube-system pods found
	I0814 17:38:13.803214   79367 system_pods.go:61] "coredns-6f6b679f8f-tt46z" [169beaf0-0310-47d5-b212-9a81c6b3df68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:38:13.803228   79367 system_pods.go:61] "etcd-no-preload-545149" [47e22bb4-bedb-433f-ae2e-f281269c6e87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:38:13.803240   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [37854a66-b05b-49fe-834b-98f724087ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:38:13.803249   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [69189ec1-6f8c-4613-bf47-46e101a14ecd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:38:13.803307   79367 system_pods.go:61] "kube-proxy-gfrqp" [2206243d-f6e0-462c-969c-60e192038700] Running
	I0814 17:38:13.803314   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [0bbd41bd-0a18-486b-b78c-9a0e9efe209a] Running
	I0814 17:38:13.803322   79367 system_pods.go:61] "metrics-server-6867b74b74-8c2cx" [b30f3018-f316-4997-a8fa-ff6c83aa7dd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:38:13.803341   79367 system_pods.go:61] "storage-provisioner" [635027cc-ac5d-4474-a243-ef48b3580998] Running
	I0814 17:38:13.803349   79367 system_pods.go:74] duration metric: took 26.377795ms to wait for pod list to return data ...
	I0814 17:38:13.803357   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:38:13.814093   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:38:13.814120   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:38:13.814131   79367 node_conditions.go:105] duration metric: took 10.768606ms to run NodePressure ...
	I0814 17:38:13.814147   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:14.196481   79367 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202205   79367 kubeadm.go:739] kubelet initialised
	I0814 17:38:14.202239   79367 kubeadm.go:740] duration metric: took 5.723699ms waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202250   79367 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:38:14.209431   79367 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.215568   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215597   79367 pod_ready.go:81] duration metric: took 6.13175ms for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.215610   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215620   79367 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.227611   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227647   79367 pod_ready.go:81] duration metric: took 12.016107ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.227661   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227669   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.235095   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235130   79367 pod_ready.go:81] duration metric: took 7.452418ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.235143   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235153   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.244417   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244447   79367 pod_ready.go:81] duration metric: took 9.283911ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.244459   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244466   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999946   79367 pod_ready.go:92] pod "kube-proxy-gfrqp" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:14.999968   79367 pod_ready.go:81] duration metric: took 755.491905ms for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999977   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:15.945421   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:18.444758   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.761265   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.639219   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.139463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.639839   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.140251   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.639890   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.139999   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.639652   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.139316   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.639809   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:21.139471   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.005796   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.006769   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:20.506792   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:20.506815   79367 pod_ready.go:81] duration metric: took 5.50683258s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.506825   79367 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.445449   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:22.446622   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:24.943859   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.760870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:23.761708   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.139292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.640151   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.139450   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.639996   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.139447   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.639267   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.139595   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.639451   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:26.140190   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.513577   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:25.012936   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.945216   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.444769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.260276   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:28.263789   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.640120   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.140141   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.640184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.139896   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.140246   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.639895   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.139860   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.639358   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:31.140029   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.014354   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.516049   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.944967   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.444885   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:30.760174   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:33.259870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:35.260137   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.639317   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.140039   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.139240   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.640181   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.139789   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.639297   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.139871   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.639347   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:36.140044   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.013464   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.513741   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.944347   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:38.945374   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:37.759866   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:39.760334   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.640132   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.139254   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.639457   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.139928   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.639196   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:39.139906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:39.139980   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:39.179494   80228 cri.go:89] found id: ""
	I0814 17:38:39.179524   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.179535   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:39.179543   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:39.179619   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:39.210704   80228 cri.go:89] found id: ""
	I0814 17:38:39.210732   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.210741   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:39.210746   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:39.210796   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:39.247562   80228 cri.go:89] found id: ""
	I0814 17:38:39.247590   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.247597   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:39.247603   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:39.247665   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:39.281456   80228 cri.go:89] found id: ""
	I0814 17:38:39.281480   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.281488   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:39.281494   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:39.281553   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:39.318588   80228 cri.go:89] found id: ""
	I0814 17:38:39.318620   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.318630   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:39.318638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:39.318695   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:39.350270   80228 cri.go:89] found id: ""
	I0814 17:38:39.350294   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.350303   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:39.350311   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:39.350387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:39.382168   80228 cri.go:89] found id: ""
	I0814 17:38:39.382198   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.382209   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:39.382215   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:39.382325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:39.415307   80228 cri.go:89] found id: ""
	I0814 17:38:39.415342   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.415354   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:39.415375   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:39.415388   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:39.469591   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:39.469632   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:39.482909   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:39.482942   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:39.609874   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:39.609906   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:39.609921   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:39.683210   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:39.683253   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:39.013876   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.513527   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.444286   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:43.444539   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.260548   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:44.263171   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.222687   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:42.235017   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:42.235088   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:42.285518   80228 cri.go:89] found id: ""
	I0814 17:38:42.285544   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.285553   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:42.285559   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:42.285614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:42.320462   80228 cri.go:89] found id: ""
	I0814 17:38:42.320492   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.320500   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:42.320506   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:42.320594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:42.353484   80228 cri.go:89] found id: ""
	I0814 17:38:42.353515   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.353528   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:42.353537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:42.353614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:42.388122   80228 cri.go:89] found id: ""
	I0814 17:38:42.388152   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.388163   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:42.388171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:42.388239   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:42.420246   80228 cri.go:89] found id: ""
	I0814 17:38:42.420275   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.420285   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:42.420293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:42.420359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:42.454636   80228 cri.go:89] found id: ""
	I0814 17:38:42.454669   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.454680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:42.454687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:42.454749   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:42.494638   80228 cri.go:89] found id: ""
	I0814 17:38:42.494670   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.494679   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:42.494686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:42.494751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:42.532224   80228 cri.go:89] found id: ""
	I0814 17:38:42.532257   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.532269   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:42.532281   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:42.532296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:42.546100   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:42.546132   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:42.616561   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:42.616589   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:42.616604   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:42.697269   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:42.697305   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:42.737787   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:42.737821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.293788   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:45.309020   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:45.309080   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:45.349218   80228 cri.go:89] found id: ""
	I0814 17:38:45.349246   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.349254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:45.349260   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:45.349318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:45.387622   80228 cri.go:89] found id: ""
	I0814 17:38:45.387653   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.387664   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:45.387672   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:45.387750   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:45.422120   80228 cri.go:89] found id: ""
	I0814 17:38:45.422154   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.422164   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:45.422169   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:45.422226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:45.457309   80228 cri.go:89] found id: ""
	I0814 17:38:45.457337   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.457352   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:45.457361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:45.457412   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:45.488969   80228 cri.go:89] found id: ""
	I0814 17:38:45.489000   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.489011   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:45.489019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:45.489081   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:45.522230   80228 cri.go:89] found id: ""
	I0814 17:38:45.522258   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.522273   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:45.522280   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:45.522345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:45.555394   80228 cri.go:89] found id: ""
	I0814 17:38:45.555425   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.555440   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:45.555448   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:45.555520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:45.587870   80228 cri.go:89] found id: ""
	I0814 17:38:45.587899   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.587910   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:45.587934   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:45.587951   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.638662   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:45.638709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:45.652217   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:45.652248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:45.733611   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:45.733635   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:45.733648   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:45.822733   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:45.822773   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:44.013405   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.014164   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:45.445289   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:47.944672   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.760279   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:49.260108   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:48.361519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:48.374848   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:48.374916   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:48.410849   80228 cri.go:89] found id: ""
	I0814 17:38:48.410897   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.410911   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:48.410920   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:48.410986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:48.448507   80228 cri.go:89] found id: ""
	I0814 17:38:48.448530   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.448537   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:48.448543   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:48.448594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:48.486257   80228 cri.go:89] found id: ""
	I0814 17:38:48.486298   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.486306   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:48.486312   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:48.486363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:48.520447   80228 cri.go:89] found id: ""
	I0814 17:38:48.520473   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.520482   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:48.520487   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:48.520544   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:48.552659   80228 cri.go:89] found id: ""
	I0814 17:38:48.552690   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.552698   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:48.552704   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:48.552768   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:48.585302   80228 cri.go:89] found id: ""
	I0814 17:38:48.585331   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.585341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:48.585348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:48.585415   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:48.617388   80228 cri.go:89] found id: ""
	I0814 17:38:48.617417   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.617428   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:48.617435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:48.617503   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:48.658987   80228 cri.go:89] found id: ""
	I0814 17:38:48.659012   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.659019   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:48.659027   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:48.659041   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:48.719882   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:48.719915   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:48.738962   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:48.738994   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:48.807703   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:48.807727   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:48.807739   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:48.886555   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:48.886585   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:48.514199   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.013627   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:50.444135   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:52.444957   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.446434   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.760518   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.260283   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.423653   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:51.436700   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:51.436792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:51.473198   80228 cri.go:89] found id: ""
	I0814 17:38:51.473227   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.473256   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:51.473262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:51.473311   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:51.508631   80228 cri.go:89] found id: ""
	I0814 17:38:51.508664   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.508675   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:51.508682   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:51.508743   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:51.540917   80228 cri.go:89] found id: ""
	I0814 17:38:51.540950   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.540958   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:51.540965   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:51.541014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:51.578112   80228 cri.go:89] found id: ""
	I0814 17:38:51.578140   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.578150   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:51.578158   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:51.578220   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:51.612664   80228 cri.go:89] found id: ""
	I0814 17:38:51.612692   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.612700   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:51.612706   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:51.612756   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:51.646374   80228 cri.go:89] found id: ""
	I0814 17:38:51.646399   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.646407   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:51.646413   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:51.646463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:51.682052   80228 cri.go:89] found id: ""
	I0814 17:38:51.682081   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.682092   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:51.682098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:51.682147   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:51.722625   80228 cri.go:89] found id: ""
	I0814 17:38:51.722653   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.722663   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:51.722674   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:51.722687   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:51.771788   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:51.771818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:51.785403   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:51.785432   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:51.854249   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:51.854269   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:51.854281   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:51.938121   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:51.938155   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.475672   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:54.491309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:54.491399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:54.524971   80228 cri.go:89] found id: ""
	I0814 17:38:54.525001   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.525011   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:54.525023   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:54.525087   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:54.560631   80228 cri.go:89] found id: ""
	I0814 17:38:54.560661   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.560670   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:54.560675   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:54.560728   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:54.595710   80228 cri.go:89] found id: ""
	I0814 17:38:54.595740   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.595751   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:54.595759   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:54.595824   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:54.631449   80228 cri.go:89] found id: ""
	I0814 17:38:54.631476   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.631487   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:54.631495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:54.631557   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:54.666492   80228 cri.go:89] found id: ""
	I0814 17:38:54.666526   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.666539   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:54.666548   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:54.666617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:54.701111   80228 cri.go:89] found id: ""
	I0814 17:38:54.701146   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.701158   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:54.701166   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:54.701226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:54.737535   80228 cri.go:89] found id: ""
	I0814 17:38:54.737574   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.737585   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:54.737595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:54.737653   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:54.771658   80228 cri.go:89] found id: ""
	I0814 17:38:54.771679   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.771686   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:54.771694   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:54.771709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:54.841798   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:54.841817   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:54.841829   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:54.930861   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:54.930917   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.970508   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:54.970540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:55.023077   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:55.023123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:53.513137   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.014005   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.945376   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:59.445437   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.260436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:58.759613   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:57.538876   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:57.551796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:57.551868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:57.584576   80228 cri.go:89] found id: ""
	I0814 17:38:57.584601   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.584609   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:57.584617   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:57.584687   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:57.617209   80228 cri.go:89] found id: ""
	I0814 17:38:57.617239   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.617249   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:57.617257   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:57.617338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:57.650062   80228 cri.go:89] found id: ""
	I0814 17:38:57.650089   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.650096   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:57.650102   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:57.650160   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:57.681118   80228 cri.go:89] found id: ""
	I0814 17:38:57.681146   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.681154   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:57.681160   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:57.681228   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:57.713803   80228 cri.go:89] found id: ""
	I0814 17:38:57.713834   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.713842   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:57.713851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:57.713920   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:57.749555   80228 cri.go:89] found id: ""
	I0814 17:38:57.749594   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.749604   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:57.749613   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:57.749677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:57.782714   80228 cri.go:89] found id: ""
	I0814 17:38:57.782744   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.782755   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:57.782763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:57.782826   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:57.815386   80228 cri.go:89] found id: ""
	I0814 17:38:57.815414   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.815423   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:57.815436   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:57.815450   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:57.868153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:57.868183   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:57.881022   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:57.881053   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:57.950474   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:57.950501   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:57.950515   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:58.032611   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:58.032644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:00.569493   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:00.583257   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:00.583384   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:00.614680   80228 cri.go:89] found id: ""
	I0814 17:39:00.614712   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.614723   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:00.614732   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:00.614792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:00.648161   80228 cri.go:89] found id: ""
	I0814 17:39:00.648189   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.648196   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:00.648203   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:00.648256   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:00.681844   80228 cri.go:89] found id: ""
	I0814 17:39:00.681872   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.681883   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:00.681890   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:00.681952   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:00.714773   80228 cri.go:89] found id: ""
	I0814 17:39:00.714804   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.714815   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:00.714823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:00.714891   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:00.747748   80228 cri.go:89] found id: ""
	I0814 17:39:00.747774   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.747781   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:00.747787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:00.747845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:00.783135   80228 cri.go:89] found id: ""
	I0814 17:39:00.783168   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.783179   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:00.783186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:00.783242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:00.817505   80228 cri.go:89] found id: ""
	I0814 17:39:00.817541   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.817552   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:00.817567   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:00.817633   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:00.849205   80228 cri.go:89] found id: ""
	I0814 17:39:00.849231   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.849241   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:00.849252   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:00.849273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:00.902529   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:00.902567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:00.916313   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:00.916346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:00.988708   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:00.988725   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:00.988737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:01.063818   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:01.063853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:58.512313   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.513694   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:01.944987   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.945640   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.759979   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.259928   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.603241   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:03.616400   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:03.616504   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:03.649580   80228 cri.go:89] found id: ""
	I0814 17:39:03.649619   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.649637   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:03.649650   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:03.649718   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:03.686252   80228 cri.go:89] found id: ""
	I0814 17:39:03.686274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.686282   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:03.686289   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:03.686349   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:03.720995   80228 cri.go:89] found id: ""
	I0814 17:39:03.721024   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.721036   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:03.721043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:03.721094   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:03.753466   80228 cri.go:89] found id: ""
	I0814 17:39:03.753491   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.753500   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:03.753506   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:03.753554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:03.794427   80228 cri.go:89] found id: ""
	I0814 17:39:03.794450   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.794458   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:03.794464   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:03.794524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:03.826245   80228 cri.go:89] found id: ""
	I0814 17:39:03.826274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.826282   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:03.826288   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:03.826355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:03.857208   80228 cri.go:89] found id: ""
	I0814 17:39:03.857238   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.857247   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:03.857253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:03.857325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:03.892840   80228 cri.go:89] found id: ""
	I0814 17:39:03.892864   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.892871   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:03.892879   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:03.892891   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:03.948554   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:03.948579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:03.962222   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:03.962249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:04.031833   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:04.031859   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:04.031875   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:04.109572   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:04.109636   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:03.013542   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.513201   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.444222   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:08.444833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.759653   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:07.760063   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.259961   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.646923   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:06.659699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:06.659757   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:06.691918   80228 cri.go:89] found id: ""
	I0814 17:39:06.691941   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.691951   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:06.691958   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:06.692016   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:06.722675   80228 cri.go:89] found id: ""
	I0814 17:39:06.722703   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.722713   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:06.722720   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:06.722782   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:06.757215   80228 cri.go:89] found id: ""
	I0814 17:39:06.757248   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.757259   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:06.757266   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:06.757333   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:06.791337   80228 cri.go:89] found id: ""
	I0814 17:39:06.791370   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.791378   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:06.791384   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:06.791440   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:06.825182   80228 cri.go:89] found id: ""
	I0814 17:39:06.825209   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.825220   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:06.825234   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:06.825288   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:06.857473   80228 cri.go:89] found id: ""
	I0814 17:39:06.857498   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.857507   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:06.857514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:06.857582   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:06.891293   80228 cri.go:89] found id: ""
	I0814 17:39:06.891343   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.891355   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:06.891363   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:06.891421   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:06.927476   80228 cri.go:89] found id: ""
	I0814 17:39:06.927505   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.927516   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:06.927527   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:06.927541   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:06.980604   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:06.980635   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:06.994648   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:06.994678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:07.072554   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:07.072580   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:07.072599   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:07.153141   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:07.153182   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:09.693348   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:09.705754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:09.705814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:09.739674   80228 cri.go:89] found id: ""
	I0814 17:39:09.739706   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.739717   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:09.739724   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:09.739788   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:09.774381   80228 cri.go:89] found id: ""
	I0814 17:39:09.774405   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.774413   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:09.774420   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:09.774478   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:09.806586   80228 cri.go:89] found id: ""
	I0814 17:39:09.806614   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.806623   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:09.806629   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:09.806696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:09.839564   80228 cri.go:89] found id: ""
	I0814 17:39:09.839594   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.839602   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:09.839614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:09.839672   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:09.872338   80228 cri.go:89] found id: ""
	I0814 17:39:09.872373   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.872385   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:09.872393   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:09.872457   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:09.904184   80228 cri.go:89] found id: ""
	I0814 17:39:09.904223   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.904231   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:09.904253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:09.904312   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:09.937217   80228 cri.go:89] found id: ""
	I0814 17:39:09.937242   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.937251   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:09.937259   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:09.937322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:09.972273   80228 cri.go:89] found id: ""
	I0814 17:39:09.972301   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.972313   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:09.972325   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:09.972341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:10.023736   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:10.023764   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:10.036654   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:10.036681   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:10.104031   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:10.104052   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:10.104068   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:10.187816   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:10.187853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:08.013632   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.513090   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.944491   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.945542   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.260129   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:14.759867   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.727237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:12.741970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:12.742041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:12.778721   80228 cri.go:89] found id: ""
	I0814 17:39:12.778748   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.778758   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:12.778765   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:12.778820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:12.812575   80228 cri.go:89] found id: ""
	I0814 17:39:12.812603   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.812610   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:12.812619   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:12.812678   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:12.845697   80228 cri.go:89] found id: ""
	I0814 17:39:12.845726   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.845737   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:12.845744   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:12.845809   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:12.879491   80228 cri.go:89] found id: ""
	I0814 17:39:12.879518   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.879529   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:12.879536   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:12.879604   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:12.912321   80228 cri.go:89] found id: ""
	I0814 17:39:12.912348   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.912356   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:12.912361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:12.912410   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:12.948866   80228 cri.go:89] found id: ""
	I0814 17:39:12.948889   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.948897   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:12.948903   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:12.948963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:12.983394   80228 cri.go:89] found id: ""
	I0814 17:39:12.983444   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.983459   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:12.983466   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:12.983530   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:13.018406   80228 cri.go:89] found id: ""
	I0814 17:39:13.018427   80228 logs.go:276] 0 containers: []
	W0814 17:39:13.018434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:13.018442   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:13.018457   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.069615   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:13.069655   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:13.082618   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:13.082651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:13.145033   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:13.145054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:13.145067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:13.225074   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:13.225108   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:15.765512   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:15.778320   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:15.778380   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:15.812847   80228 cri.go:89] found id: ""
	I0814 17:39:15.812876   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.812885   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:15.812896   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:15.812944   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:15.845131   80228 cri.go:89] found id: ""
	I0814 17:39:15.845159   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.845169   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:15.845176   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:15.845242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:15.879763   80228 cri.go:89] found id: ""
	I0814 17:39:15.879789   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.879799   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:15.879807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:15.879864   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:15.912746   80228 cri.go:89] found id: ""
	I0814 17:39:15.912776   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.912784   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:15.912797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:15.912858   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:15.946433   80228 cri.go:89] found id: ""
	I0814 17:39:15.946456   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.946465   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:15.946473   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:15.946534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:15.980060   80228 cri.go:89] found id: ""
	I0814 17:39:15.980086   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.980096   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:15.980103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:15.980167   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:16.011539   80228 cri.go:89] found id: ""
	I0814 17:39:16.011570   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.011581   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:16.011590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:16.011660   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:16.046019   80228 cri.go:89] found id: ""
	I0814 17:39:16.046046   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.046057   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:16.046068   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:16.046083   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:16.058442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:16.058470   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:16.132775   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:16.132799   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:16.132811   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:16.218360   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:16.218398   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:16.258070   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:16.258096   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.013275   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.013967   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.444280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:17.444827   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.943845   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:16.760773   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.259891   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:18.813127   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:18.826187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:18.826267   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:18.858405   80228 cri.go:89] found id: ""
	I0814 17:39:18.858433   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.858444   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:18.858452   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:18.858524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:18.893302   80228 cri.go:89] found id: ""
	I0814 17:39:18.893335   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.893342   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:18.893350   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:18.893417   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:18.929885   80228 cri.go:89] found id: ""
	I0814 17:39:18.929919   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.929929   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:18.929937   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:18.930000   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:18.966758   80228 cri.go:89] found id: ""
	I0814 17:39:18.966783   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.966792   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:18.966799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:18.966861   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:18.999815   80228 cri.go:89] found id: ""
	I0814 17:39:18.999838   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.999845   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:18.999851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:18.999915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:19.033737   80228 cri.go:89] found id: ""
	I0814 17:39:19.033761   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.033768   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:19.033774   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:19.033830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:19.070080   80228 cri.go:89] found id: ""
	I0814 17:39:19.070105   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.070113   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:19.070119   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:19.070190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:19.102868   80228 cri.go:89] found id: ""
	I0814 17:39:19.102897   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.102907   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:19.102918   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:19.102932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:19.156525   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:19.156569   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:19.170193   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:19.170225   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:19.236521   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:19.236547   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:19.236561   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:19.315984   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:19.316024   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:17.512553   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.513046   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.513082   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:22.444948   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:24.945111   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.260362   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:23.260567   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.855554   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:21.868457   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:21.868527   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:21.902098   80228 cri.go:89] found id: ""
	I0814 17:39:21.902124   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.902132   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:21.902139   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:21.902200   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:21.934876   80228 cri.go:89] found id: ""
	I0814 17:39:21.934908   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.934919   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:21.934926   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:21.934987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:21.976507   80228 cri.go:89] found id: ""
	I0814 17:39:21.976536   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.976548   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:21.976555   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:21.976617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:22.013876   80228 cri.go:89] found id: ""
	I0814 17:39:22.013897   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.013904   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:22.013909   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:22.013955   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:22.051943   80228 cri.go:89] found id: ""
	I0814 17:39:22.051969   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.051979   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:22.051999   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:22.052064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:22.084702   80228 cri.go:89] found id: ""
	I0814 17:39:22.084725   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.084733   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:22.084738   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:22.084784   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:22.117397   80228 cri.go:89] found id: ""
	I0814 17:39:22.117424   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.117432   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:22.117439   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:22.117490   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:22.154139   80228 cri.go:89] found id: ""
	I0814 17:39:22.154168   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.154178   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:22.154189   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:22.154201   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:22.205550   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:22.205580   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:22.219644   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:22.219679   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:22.288934   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:22.288957   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:22.288969   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:22.372917   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:22.372954   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
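	(The cycle above repeats while minikube waits for the legacy v1.20.0 control plane to come up: it probes for a kube-apiserver process, then asks the CRI runtime for containers by component name, and every query returns empty. A minimal sketch of that probe, assembled only from the commands visible in this log and assumed to run on the minikube guest, e.g. via `minikube ssh`:)

	#!/usr/bin/env bash
	# Sketch of the control-plane probe reflected in the log above (CRI-O runtime assumed).
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  if [ -z "${ids}" ]; then
	    echo "no container found matching ${name}"   # mirrors the 'No container was found matching ...' warnings
	  else
	    echo "${name}: ${ids}"
	  fi
	done
	# Process-level check the log runs before the crictl queries:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not running"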
	I0814 17:39:24.912578   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:24.925365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:24.925430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:24.961207   80228 cri.go:89] found id: ""
	I0814 17:39:24.961234   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.961248   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:24.961257   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:24.961339   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:24.998878   80228 cri.go:89] found id: ""
	I0814 17:39:24.998904   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.998911   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:24.998918   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:24.998971   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:25.034141   80228 cri.go:89] found id: ""
	I0814 17:39:25.034174   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.034187   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:25.034196   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:25.034274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:25.075634   80228 cri.go:89] found id: ""
	I0814 17:39:25.075667   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.075679   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:25.075688   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:25.075759   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:25.112890   80228 cri.go:89] found id: ""
	I0814 17:39:25.112929   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.112939   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:25.112946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:25.113007   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:25.152887   80228 cri.go:89] found id: ""
	I0814 17:39:25.152913   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.152921   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:25.152927   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:25.152987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:25.186421   80228 cri.go:89] found id: ""
	I0814 17:39:25.186452   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.186463   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:25.186471   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:25.186537   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:25.220390   80228 cri.go:89] found id: ""
	I0814 17:39:25.220417   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.220425   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:25.220432   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:25.220446   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:25.296112   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:25.296146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:25.335421   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:25.335449   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:25.387690   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:25.387718   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:25.401926   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:25.401957   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:25.471111   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:24.012534   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:26.513529   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.445280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:29.445416   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:25.759098   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.759924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:30.259610   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
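	(The interleaved pod_ready lines come from the other test profiles polling their metrics-server pods, whose Ready condition stays "False" throughout this window. An equivalent manual check is sketched below; the test itself uses the Go client, so the kubectl/jsonpath form and the <profile> placeholder are assumptions for illustration only.)

	# Illustrative only: inspect the Ready condition of the metrics-server pod being polled above.
	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-qtzm8 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'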
	I0814 17:39:27.972237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:27.985512   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:27.985575   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:28.019454   80228 cri.go:89] found id: ""
	I0814 17:39:28.019482   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.019493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:28.019502   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:28.019566   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:28.056908   80228 cri.go:89] found id: ""
	I0814 17:39:28.056931   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.056939   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:28.056944   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:28.056998   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:28.090678   80228 cri.go:89] found id: ""
	I0814 17:39:28.090707   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.090715   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:28.090721   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:28.090785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:28.125557   80228 cri.go:89] found id: ""
	I0814 17:39:28.125591   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.125609   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:28.125620   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:28.125682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:28.158092   80228 cri.go:89] found id: ""
	I0814 17:39:28.158121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.158129   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:28.158135   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:28.158191   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:28.193403   80228 cri.go:89] found id: ""
	I0814 17:39:28.193434   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.193445   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:28.193454   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:28.193524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:28.231095   80228 cri.go:89] found id: ""
	I0814 17:39:28.231121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.231131   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:28.231139   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:28.231203   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:28.280157   80228 cri.go:89] found id: ""
	I0814 17:39:28.280185   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.280196   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:28.280207   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:28.280220   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:28.352877   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:28.352894   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:28.352906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:28.439692   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:28.439736   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:28.479986   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:28.480012   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:28.538454   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:28.538493   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
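	(When every container query comes back empty, minikube falls back to a diagnostics pass: kubelet and CRI-O journals, dmesg, a container listing, and a `kubectl describe nodes` that keeps failing because nothing is listening on localhost:8443. The commands, copied from the log and assumed to run on the minikube guest, amount to:)

	# Diagnostics-gathering pass as shown in the log above.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	# Fails with "connection refused" while the apiserver is down:
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig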
	I0814 17:39:31.052941   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:31.065810   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:31.065879   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:31.097988   80228 cri.go:89] found id: ""
	I0814 17:39:31.098013   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.098020   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:31.098045   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:31.098102   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:31.130126   80228 cri.go:89] found id: ""
	I0814 17:39:31.130152   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.130160   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:31.130166   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:31.130225   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:31.165945   80228 cri.go:89] found id: ""
	I0814 17:39:31.165984   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.165995   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:31.166003   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:31.166064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:31.199749   80228 cri.go:89] found id: ""
	I0814 17:39:31.199772   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.199778   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:31.199784   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:31.199843   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:31.231398   80228 cri.go:89] found id: ""
	I0814 17:39:31.231425   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.231436   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:31.231444   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:31.231528   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:31.263842   80228 cri.go:89] found id: ""
	I0814 17:39:31.263868   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.263878   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:31.263885   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:31.263950   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:31.299258   80228 cri.go:89] found id: ""
	I0814 17:39:31.299289   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.299301   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:31.299309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:31.299399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:29.013468   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.013638   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.445769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:33.944939   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:32.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:34.262303   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.332626   80228 cri.go:89] found id: ""
	I0814 17:39:31.332649   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.332657   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:31.332666   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:31.332678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:31.369262   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:31.369288   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:31.426003   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:31.426034   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.439583   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:31.439611   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:31.507538   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:31.507563   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:31.507583   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.085342   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:34.097491   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:34.097567   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:34.129220   80228 cri.go:89] found id: ""
	I0814 17:39:34.129244   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.129254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:34.129262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:34.129322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:34.161233   80228 cri.go:89] found id: ""
	I0814 17:39:34.161256   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.161264   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:34.161270   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:34.161334   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:34.193649   80228 cri.go:89] found id: ""
	I0814 17:39:34.193675   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.193683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:34.193689   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:34.193754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:34.226722   80228 cri.go:89] found id: ""
	I0814 17:39:34.226753   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.226763   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:34.226772   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:34.226842   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:34.259735   80228 cri.go:89] found id: ""
	I0814 17:39:34.259761   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.259774   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:34.259787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:34.259850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:34.296804   80228 cri.go:89] found id: ""
	I0814 17:39:34.296830   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.296838   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:34.296844   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:34.296894   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:34.328942   80228 cri.go:89] found id: ""
	I0814 17:39:34.328973   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.328982   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:34.328988   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:34.329041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:34.360820   80228 cri.go:89] found id: ""
	I0814 17:39:34.360847   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.360858   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:34.360868   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:34.360882   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:34.411106   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:34.411142   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:34.424737   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:34.424769   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:34.489094   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:34.489122   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:34.489138   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.569783   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:34.569818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:33.015308   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.513073   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.945264   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:38.444913   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:36.760740   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:39.260499   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:37.107492   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:37.120829   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:37.120901   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:37.154556   80228 cri.go:89] found id: ""
	I0814 17:39:37.154589   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.154601   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:37.154609   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:37.154673   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:37.192570   80228 cri.go:89] found id: ""
	I0814 17:39:37.192602   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.192609   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:37.192615   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:37.192679   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:37.225845   80228 cri.go:89] found id: ""
	I0814 17:39:37.225891   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.225902   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:37.225917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:37.225986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:37.262370   80228 cri.go:89] found id: ""
	I0814 17:39:37.262399   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.262408   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:37.262416   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:37.262481   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:37.297642   80228 cri.go:89] found id: ""
	I0814 17:39:37.297669   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.297680   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:37.297687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:37.297754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:37.331006   80228 cri.go:89] found id: ""
	I0814 17:39:37.331032   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.331041   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:37.331046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:37.331111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:37.364753   80228 cri.go:89] found id: ""
	I0814 17:39:37.364777   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.364786   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:37.364792   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:37.364850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:37.397722   80228 cri.go:89] found id: ""
	I0814 17:39:37.397748   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.397760   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:37.397770   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:37.397785   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:37.473616   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:37.473643   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:37.473659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:37.557672   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:37.557710   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.596337   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:37.596368   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:37.646815   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:37.646853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.160391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:40.174099   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:40.174181   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:40.208783   80228 cri.go:89] found id: ""
	I0814 17:39:40.208814   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.208821   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:40.208829   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:40.208880   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:40.243555   80228 cri.go:89] found id: ""
	I0814 17:39:40.243580   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.243588   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:40.243594   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:40.243661   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:40.276685   80228 cri.go:89] found id: ""
	I0814 17:39:40.276711   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.276723   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:40.276731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:40.276795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:40.309893   80228 cri.go:89] found id: ""
	I0814 17:39:40.309925   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.309937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:40.309944   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:40.310073   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:40.341724   80228 cri.go:89] found id: ""
	I0814 17:39:40.341751   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.341762   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:40.341770   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:40.341834   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:40.376442   80228 cri.go:89] found id: ""
	I0814 17:39:40.376478   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.376487   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:40.376495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:40.376558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:40.419240   80228 cri.go:89] found id: ""
	I0814 17:39:40.419269   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.419277   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:40.419284   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:40.419374   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:40.464678   80228 cri.go:89] found id: ""
	I0814 17:39:40.464703   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.464712   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:40.464721   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:40.464737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:40.531138   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:40.531175   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.546809   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:40.546842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:40.618791   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:40.618809   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:40.618821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:40.706169   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:40.706219   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.513604   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.013349   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.445989   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:42.944417   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:41.261429   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.760436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.250987   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:43.266109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:43.266179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:43.301860   80228 cri.go:89] found id: ""
	I0814 17:39:43.301891   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.301899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:43.301908   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:43.301991   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:43.337166   80228 cri.go:89] found id: ""
	I0814 17:39:43.337195   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.337205   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:43.337212   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:43.337262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:43.370640   80228 cri.go:89] found id: ""
	I0814 17:39:43.370671   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.370683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:43.370696   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:43.370752   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:43.405598   80228 cri.go:89] found id: ""
	I0814 17:39:43.405624   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.405632   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:43.405638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:43.405705   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:43.437161   80228 cri.go:89] found id: ""
	I0814 17:39:43.437184   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.437192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:43.437198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:43.437295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:43.470675   80228 cri.go:89] found id: ""
	I0814 17:39:43.470707   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.470718   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:43.470726   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:43.470787   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:43.503036   80228 cri.go:89] found id: ""
	I0814 17:39:43.503062   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.503073   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:43.503081   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:43.503149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:43.538269   80228 cri.go:89] found id: ""
	I0814 17:39:43.538296   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.538304   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:43.538328   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:43.538340   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:43.621889   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:43.621936   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:43.667460   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:43.667491   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:43.723630   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:43.723663   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:43.738905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:43.738939   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:43.805484   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:46.306031   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:42.512438   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:44.513112   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.513203   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:45.445470   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:47.944790   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.260236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:48.260662   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.324624   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:46.324696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:46.360039   80228 cri.go:89] found id: ""
	I0814 17:39:46.360066   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.360074   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:46.360082   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:46.360131   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:46.413735   80228 cri.go:89] found id: ""
	I0814 17:39:46.413767   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.413779   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:46.413788   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:46.413876   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:46.458823   80228 cri.go:89] found id: ""
	I0814 17:39:46.458851   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.458861   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:46.458869   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:46.458928   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:46.495347   80228 cri.go:89] found id: ""
	I0814 17:39:46.495378   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.495387   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:46.495392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:46.495441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:46.531502   80228 cri.go:89] found id: ""
	I0814 17:39:46.531533   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.531545   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:46.531554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:46.531624   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:46.564450   80228 cri.go:89] found id: ""
	I0814 17:39:46.564473   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.564482   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:46.564488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:46.564535   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:46.598293   80228 cri.go:89] found id: ""
	I0814 17:39:46.598401   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.598421   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:46.598431   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:46.598498   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:46.632370   80228 cri.go:89] found id: ""
	I0814 17:39:46.632400   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.632411   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:46.632423   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:46.632438   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:46.711814   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:46.711848   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:46.749410   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:46.749443   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:46.801686   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:46.801720   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:46.815196   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:46.815218   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:46.885648   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.386223   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:49.399359   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:49.399430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:49.432133   80228 cri.go:89] found id: ""
	I0814 17:39:49.432168   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.432179   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:49.432186   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:49.432250   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:49.469760   80228 cri.go:89] found id: ""
	I0814 17:39:49.469790   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.469799   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:49.469811   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:49.469873   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:49.500437   80228 cri.go:89] found id: ""
	I0814 17:39:49.500466   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.500474   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:49.500481   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:49.500531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:49.533685   80228 cri.go:89] found id: ""
	I0814 17:39:49.533709   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.533717   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:49.533723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:49.533790   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:49.570551   80228 cri.go:89] found id: ""
	I0814 17:39:49.570577   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.570584   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:49.570590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:49.570654   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:49.606649   80228 cri.go:89] found id: ""
	I0814 17:39:49.606672   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.606680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:49.606686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:49.606734   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:49.638060   80228 cri.go:89] found id: ""
	I0814 17:39:49.638090   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.638101   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:49.638109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:49.638178   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:49.674503   80228 cri.go:89] found id: ""
	I0814 17:39:49.674526   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.674534   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:49.674543   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:49.674563   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:49.710185   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:49.710213   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:49.764112   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:49.764146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:49.777862   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:49.777888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:49.849786   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.849806   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:49.849819   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:48.513418   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:51.013242   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.444526   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.444788   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.944646   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.759890   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.760236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.760324   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.429811   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:52.444364   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:52.444441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:52.483047   80228 cri.go:89] found id: ""
	I0814 17:39:52.483074   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.483085   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:52.483093   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:52.483157   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:52.520236   80228 cri.go:89] found id: ""
	I0814 17:39:52.520264   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.520274   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:52.520287   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:52.520353   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:52.553757   80228 cri.go:89] found id: ""
	I0814 17:39:52.553784   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.553795   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:52.553802   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:52.553869   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:52.588782   80228 cri.go:89] found id: ""
	I0814 17:39:52.588808   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.588818   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:52.588827   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:52.588893   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:52.620144   80228 cri.go:89] found id: ""
	I0814 17:39:52.620180   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.620192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:52.620201   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:52.620274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:52.652712   80228 cri.go:89] found id: ""
	I0814 17:39:52.652743   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.652755   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:52.652763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:52.652825   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:52.687789   80228 cri.go:89] found id: ""
	I0814 17:39:52.687819   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.687831   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:52.687838   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:52.687892   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:52.718996   80228 cri.go:89] found id: ""
	I0814 17:39:52.719021   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.719031   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:52.719041   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:52.719055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:52.775775   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:52.775808   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:52.789024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:52.789055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:52.863320   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:52.863351   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:52.863366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:52.941533   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:52.941571   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
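Each block like the one above is one pass of minikube's wait loop for process 80228: it first looks for a running kube-apiserver process with pgrep, then asks the CRI runtime (via crictl) for containers of every control-plane component, and when nothing is found it falls back to collecting kubelet, dmesg, "describe nodes", CRI-O, and container-status output. The commands below are the same ones shown verbatim in the log, arranged as a rough manual reproduction to run over minikube ssh on the affected node; this is an aid for re-checking the state by hand, not minikube's own code.

    # is an apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # ask the container runtime for control-plane containers (all return empty in this log)
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done
    # the fallback log sources minikube collects when the containers are missing
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400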
	I0814 17:39:55.477833   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:55.490723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:55.490783   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:55.525816   80228 cri.go:89] found id: ""
	I0814 17:39:55.525844   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.525852   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:55.525859   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:55.525908   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:55.561855   80228 cri.go:89] found id: ""
	I0814 17:39:55.561878   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.561887   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:55.561892   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:55.561949   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:55.599997   80228 cri.go:89] found id: ""
	I0814 17:39:55.600027   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.600038   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:55.600046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:55.600112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:55.632869   80228 cri.go:89] found id: ""
	I0814 17:39:55.632902   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.632914   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:55.632922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:55.632990   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:55.666029   80228 cri.go:89] found id: ""
	I0814 17:39:55.666055   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.666066   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:55.666079   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:55.666136   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:55.697222   80228 cri.go:89] found id: ""
	I0814 17:39:55.697247   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.697254   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:55.697260   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:55.697308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:55.729517   80228 cri.go:89] found id: ""
	I0814 17:39:55.729549   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.729561   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:55.729576   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:55.729640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:55.763890   80228 cri.go:89] found id: ""
	I0814 17:39:55.763922   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.763934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:55.763944   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:55.763960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:55.819588   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:55.819624   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:55.833281   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:55.833314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:55.904610   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:55.904632   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:55.904644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:55.981035   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:55.981069   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:53.513407   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:55.513734   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:56.945649   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.444937   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:57.259832   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.760669   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:58.522870   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:58.536151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:58.536224   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:58.568827   80228 cri.go:89] found id: ""
	I0814 17:39:58.568857   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.568869   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:58.568877   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:58.568946   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:58.600523   80228 cri.go:89] found id: ""
	I0814 17:39:58.600554   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.600564   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:58.600571   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:58.600640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:58.634201   80228 cri.go:89] found id: ""
	I0814 17:39:58.634232   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.634240   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:58.634245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:58.634308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:58.668746   80228 cri.go:89] found id: ""
	I0814 17:39:58.668772   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.668781   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:58.668787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:58.668847   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:58.699695   80228 cri.go:89] found id: ""
	I0814 17:39:58.699727   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.699739   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:58.699752   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:58.699836   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:58.731047   80228 cri.go:89] found id: ""
	I0814 17:39:58.731081   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.731095   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:58.731103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:58.731168   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:58.773454   80228 cri.go:89] found id: ""
	I0814 17:39:58.773486   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.773495   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:58.773501   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:58.773561   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:58.810135   80228 cri.go:89] found id: ""
	I0814 17:39:58.810159   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.810166   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:58.810175   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:58.810191   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:58.844897   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:58.844925   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:58.901700   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:58.901745   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:58.914272   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:58.914296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:58.984593   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:58.984610   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:58.984622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:57.513854   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:00.013241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.945861   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.444575   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:02.262241   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.760164   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.563227   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:01.576764   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:01.576840   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:01.610842   80228 cri.go:89] found id: ""
	I0814 17:40:01.610871   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.610878   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:01.610884   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:01.610935   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:01.643774   80228 cri.go:89] found id: ""
	I0814 17:40:01.643806   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.643816   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:01.643824   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:01.643888   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:01.677867   80228 cri.go:89] found id: ""
	I0814 17:40:01.677892   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.677899   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:01.677906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:01.677967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:01.712394   80228 cri.go:89] found id: ""
	I0814 17:40:01.712420   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.712427   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:01.712433   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:01.712492   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:01.745637   80228 cri.go:89] found id: ""
	I0814 17:40:01.745666   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.745676   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:01.745683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:01.745745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:01.782364   80228 cri.go:89] found id: ""
	I0814 17:40:01.782394   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.782404   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:01.782411   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:01.782484   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:01.814569   80228 cri.go:89] found id: ""
	I0814 17:40:01.814596   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.814605   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:01.814614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:01.814674   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:01.850421   80228 cri.go:89] found id: ""
	I0814 17:40:01.850450   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.850459   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:01.850468   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:01.850482   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:01.862965   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:01.863001   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:01.931312   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:01.931357   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:01.931375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:02.008236   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:02.008278   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:02.043238   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:02.043267   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:04.596909   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:04.610091   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:04.610158   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:04.645169   80228 cri.go:89] found id: ""
	I0814 17:40:04.645195   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.645205   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:04.645213   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:04.645279   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:04.677708   80228 cri.go:89] found id: ""
	I0814 17:40:04.677740   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.677750   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:04.677761   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:04.677823   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:04.710319   80228 cri.go:89] found id: ""
	I0814 17:40:04.710351   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.710362   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:04.710374   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:04.710443   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:04.745166   80228 cri.go:89] found id: ""
	I0814 17:40:04.745202   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.745219   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:04.745226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:04.745287   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:04.777307   80228 cri.go:89] found id: ""
	I0814 17:40:04.777354   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.777376   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:04.777383   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:04.777447   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:04.813854   80228 cri.go:89] found id: ""
	I0814 17:40:04.813886   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.813901   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:04.813908   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:04.813972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:04.848014   80228 cri.go:89] found id: ""
	I0814 17:40:04.848041   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.848049   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:04.848055   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:04.848113   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:04.882689   80228 cri.go:89] found id: ""
	I0814 17:40:04.882719   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.882731   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:04.882742   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:04.882760   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:04.952074   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:04.952096   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:04.952112   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:05.030258   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:05.030300   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:05.066509   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:05.066542   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:05.120153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:05.120195   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
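Every "describe nodes" attempt in these passes fails with "The connection to the server localhost:8443 was refused" because the bundled kubectl is pointed at the apiserver address in /var/lib/minikube/kubeconfig, and no kube-apiserver container ever comes up, so nothing is listening on port 8443. Two quick checks on the node confirm that picture; these are standard tools rather than commands taken from this log, and the refused connection is the expected result while the loop above keeps failing.

    # nothing should be listening on the apiserver port while the loop is failing
    sudo ss -tlnp | grep 8443
    # probe the apiserver endpoint directly; "connection refused" matches the log
    curl -k https://localhost:8443/healthz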
	I0814 17:40:02.512935   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.513254   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.445637   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.945142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.760223   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.760857   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:07.634404   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:07.646900   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:07.646966   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:07.678654   80228 cri.go:89] found id: ""
	I0814 17:40:07.678680   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.678689   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:07.678696   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:07.678753   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:07.711355   80228 cri.go:89] found id: ""
	I0814 17:40:07.711381   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.711389   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:07.711395   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:07.711446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:07.744134   80228 cri.go:89] found id: ""
	I0814 17:40:07.744161   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.744169   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:07.744179   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:07.744242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:07.776981   80228 cri.go:89] found id: ""
	I0814 17:40:07.777008   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.777015   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:07.777022   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:07.777086   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:07.811626   80228 cri.go:89] found id: ""
	I0814 17:40:07.811651   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.811661   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:07.811667   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:07.811720   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:07.843218   80228 cri.go:89] found id: ""
	I0814 17:40:07.843251   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.843262   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:07.843270   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:07.843355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:07.875208   80228 cri.go:89] found id: ""
	I0814 17:40:07.875232   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.875239   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:07.875245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:07.875295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:07.907896   80228 cri.go:89] found id: ""
	I0814 17:40:07.907923   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.907934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:07.907945   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:07.907960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:07.959717   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:07.959753   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:07.973050   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:07.973081   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:08.035085   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:08.035107   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:08.035120   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:08.109722   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:08.109770   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:10.648203   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:10.661194   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:10.661280   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:10.698401   80228 cri.go:89] found id: ""
	I0814 17:40:10.698431   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.698442   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:10.698450   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:10.698515   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:10.730057   80228 cri.go:89] found id: ""
	I0814 17:40:10.730083   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.730094   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:10.730101   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:10.730163   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:10.768780   80228 cri.go:89] found id: ""
	I0814 17:40:10.768807   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.768817   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:10.768824   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:10.768885   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:10.800866   80228 cri.go:89] found id: ""
	I0814 17:40:10.800898   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.800907   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:10.800917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:10.800984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:10.833741   80228 cri.go:89] found id: ""
	I0814 17:40:10.833771   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.833782   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:10.833789   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:10.833850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:10.865670   80228 cri.go:89] found id: ""
	I0814 17:40:10.865699   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.865706   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:10.865717   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:10.865770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:10.904726   80228 cri.go:89] found id: ""
	I0814 17:40:10.904757   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.904765   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:10.904771   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:10.904821   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:10.940549   80228 cri.go:89] found id: ""
	I0814 17:40:10.940578   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.940588   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:10.940598   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:10.940620   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:10.992592   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:10.992622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:11.006388   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:11.006412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:11.075455   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:11.075473   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:11.075486   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:11.156622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:11.156658   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
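The timestamps show the wait loop retrying roughly every three seconds (17:40:07, :10, :13, ...), and each retry produces the same "0 containers" result for every component. Below is a crude shell equivalent of that polling behaviour, useful for watching whether the kube-apiserver container ever appears while debugging interactively; it mirrors only the observable cadence, and the interval and exit condition are assumptions rather than minikube's actual implementation.

    # poll until a kube-apiserver container shows up, roughly matching the ~3s cadence above
    while true; do
      id=$(sudo crictl ps -a --quiet --name=kube-apiserver)
      if [ -n "$id" ]; then
        echo "kube-apiserver container found: $id"
        break
      fi
      echo "$(date +%T) still no kube-apiserver container"
      sleep 3
    done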
	I0814 17:40:07.012878   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:09.013908   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.512592   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.444764   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.944931   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.259959   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.760823   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.695055   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:13.709460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:13.709531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:13.741941   80228 cri.go:89] found id: ""
	I0814 17:40:13.741967   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.741975   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:13.741981   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:13.742042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:13.773916   80228 cri.go:89] found id: ""
	I0814 17:40:13.773940   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.773947   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:13.773952   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:13.773999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:13.807871   80228 cri.go:89] found id: ""
	I0814 17:40:13.807902   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.807912   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:13.807918   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:13.807981   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:13.840902   80228 cri.go:89] found id: ""
	I0814 17:40:13.840931   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.840943   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:13.840952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:13.841018   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:13.871969   80228 cri.go:89] found id: ""
	I0814 17:40:13.871998   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.872010   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:13.872019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:13.872090   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:13.905502   80228 cri.go:89] found id: ""
	I0814 17:40:13.905524   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.905531   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:13.905537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:13.905599   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:13.937356   80228 cri.go:89] found id: ""
	I0814 17:40:13.937386   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.937396   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:13.937404   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:13.937466   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:13.972383   80228 cri.go:89] found id: ""
	I0814 17:40:13.972410   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.972418   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:13.972427   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:13.972448   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:14.022691   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:14.022717   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:14.035543   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:14.035567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:14.104869   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:14.104889   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:14.104905   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:14.182185   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:14.182221   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:13.513519   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.012958   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:15.945499   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.445122   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.259488   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.259706   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.259972   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.720519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:16.734323   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:16.734406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:16.769454   80228 cri.go:89] found id: ""
	I0814 17:40:16.769483   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.769493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:16.769501   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:16.769565   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:16.801513   80228 cri.go:89] found id: ""
	I0814 17:40:16.801541   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.801548   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:16.801554   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:16.801610   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:16.835184   80228 cri.go:89] found id: ""
	I0814 17:40:16.835212   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.835220   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:16.835226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:16.835275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:16.867162   80228 cri.go:89] found id: ""
	I0814 17:40:16.867192   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.867201   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:16.867207   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:16.867257   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:16.902912   80228 cri.go:89] found id: ""
	I0814 17:40:16.902942   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.902953   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:16.902961   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:16.903026   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:16.935004   80228 cri.go:89] found id: ""
	I0814 17:40:16.935033   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.935044   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:16.935052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:16.935115   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:16.969082   80228 cri.go:89] found id: ""
	I0814 17:40:16.969110   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.969120   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:16.969127   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:16.969194   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:17.002594   80228 cri.go:89] found id: ""
	I0814 17:40:17.002622   80228 logs.go:276] 0 containers: []
	W0814 17:40:17.002633   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:17.002644   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:17.002659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:17.054319   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:17.054359   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:17.068024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:17.068048   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:17.139480   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:17.139499   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:17.139514   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:17.222086   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:17.222140   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:19.758630   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:19.772186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:19.772254   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:19.807719   80228 cri.go:89] found id: ""
	I0814 17:40:19.807751   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.807760   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:19.807766   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:19.807830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:19.851023   80228 cri.go:89] found id: ""
	I0814 17:40:19.851054   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.851067   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:19.851083   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:19.851154   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:19.882961   80228 cri.go:89] found id: ""
	I0814 17:40:19.882987   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.882997   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:19.883005   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:19.883063   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:19.920312   80228 cri.go:89] found id: ""
	I0814 17:40:19.920345   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.920356   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:19.920365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:19.920430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:19.953628   80228 cri.go:89] found id: ""
	I0814 17:40:19.953658   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.953671   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:19.953683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:19.953741   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:19.984998   80228 cri.go:89] found id: ""
	I0814 17:40:19.985028   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.985036   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:19.985043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:19.985092   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:20.018728   80228 cri.go:89] found id: ""
	I0814 17:40:20.018753   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.018761   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:20.018766   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:20.018814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:20.050718   80228 cri.go:89] found id: ""
	I0814 17:40:20.050743   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.050757   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:20.050765   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:20.050777   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:20.101567   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:20.101602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:20.114890   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:20.114920   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:20.183926   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:20.183948   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:20.183960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:20.270195   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:20.270223   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
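Since every crictl query in these passes succeeds but returns an empty list, the container runtime itself is reachable; the more useful signal is likely in the CRI-O and kubelet journals that minikube keeps re-collecting above. When reading a report like this by hand, checking the runtime's own health first helps narrow things down; the commands below are generic CRI-O/systemd checks, not taken from this log.

    # confirm the CRI-O service is up and scan its recent errors
    sudo systemctl status crio --no-pager
    sudo journalctl -u crio --since "10 min ago" --no-pager | grep -iE 'error|fail'
    # runtime status as seen through the CRI
    sudo crictl info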
	I0814 17:40:18.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.513633   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.445352   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.945704   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.260365   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:24.760475   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.807078   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:22.820187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:22.820260   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:22.852474   80228 cri.go:89] found id: ""
	I0814 17:40:22.852504   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.852514   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:22.852522   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:22.852596   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:22.887141   80228 cri.go:89] found id: ""
	I0814 17:40:22.887167   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.887177   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:22.887184   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:22.887248   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:22.919384   80228 cri.go:89] found id: ""
	I0814 17:40:22.919417   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.919428   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:22.919436   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:22.919502   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:22.951877   80228 cri.go:89] found id: ""
	I0814 17:40:22.951897   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.951905   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:22.951910   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:22.951965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:22.987712   80228 cri.go:89] found id: ""
	I0814 17:40:22.987742   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.987752   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:22.987760   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:22.987832   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:23.025562   80228 cri.go:89] found id: ""
	I0814 17:40:23.025597   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.025608   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:23.025616   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:23.025680   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:23.058928   80228 cri.go:89] found id: ""
	I0814 17:40:23.058955   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.058962   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:23.058969   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:23.059025   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:23.096807   80228 cri.go:89] found id: ""
	I0814 17:40:23.096836   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.096847   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:23.096858   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:23.096874   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:23.148943   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:23.148977   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:23.161905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:23.161927   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:23.232119   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:23.232147   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:23.232160   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:23.320693   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:23.320731   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
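	The block above is one pass of minikube's diagnostic loop for a cluster whose apiserver never came up: probe for a kube-apiserver process, ask the CRI runtime for each control-plane container, then dump kubelet, dmesg, describe-nodes, CRI-O and container-status output. The same checks can be repeated by hand on the node; this is only a sketch that reuses the exact commands logged above (run inside the VM, e.g. via minikube ssh):
	
	# Is an apiserver process running at all?
	sudo pgrep -xnf kube-apiserver.*minikube.*
	# Ask the CRI runtime whether a kube-apiserver container exists (empty output = none found)
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Last 400 lines of kubelet and CRI-O logs, plus kernel warnings and errors
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Describe nodes with the bundled kubectl; this keeps failing while localhost:8443 refuses connections
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig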
	I0814 17:40:25.858506   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:25.871891   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:25.871964   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:25.904732   80228 cri.go:89] found id: ""
	I0814 17:40:25.904760   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.904769   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:25.904775   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:25.904830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:25.936317   80228 cri.go:89] found id: ""
	I0814 17:40:25.936347   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.936358   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:25.936365   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:25.936427   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:25.969921   80228 cri.go:89] found id: ""
	I0814 17:40:25.969946   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.969954   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:25.969960   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:25.970009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:26.022832   80228 cri.go:89] found id: ""
	I0814 17:40:26.022862   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.022872   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:26.022880   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:26.022941   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:26.056178   80228 cri.go:89] found id: ""
	I0814 17:40:26.056206   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.056214   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:26.056224   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:26.056275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:26.086921   80228 cri.go:89] found id: ""
	I0814 17:40:26.086955   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.086966   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:26.086974   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:26.087031   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:26.120631   80228 cri.go:89] found id: ""
	I0814 17:40:26.120665   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.120677   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:26.120686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:26.120745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:26.154258   80228 cri.go:89] found id: ""
	I0814 17:40:26.154289   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.154300   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:26.154310   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:26.154324   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:26.208366   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:26.208405   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:26.222160   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:26.222192   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:26.294737   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:26.294756   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:26.294768   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:22.513813   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.013707   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.444691   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.944277   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.945043   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.260184   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.262080   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:26.372870   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:26.372906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:28.908165   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:28.920754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:28.920816   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:28.953950   80228 cri.go:89] found id: ""
	I0814 17:40:28.953971   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.953978   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:28.953987   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:28.954035   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:28.985228   80228 cri.go:89] found id: ""
	I0814 17:40:28.985266   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.985278   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:28.985286   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:28.985347   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:29.016295   80228 cri.go:89] found id: ""
	I0814 17:40:29.016328   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.016336   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:29.016341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:29.016392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:29.048664   80228 cri.go:89] found id: ""
	I0814 17:40:29.048696   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.048707   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:29.048715   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:29.048778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:29.080441   80228 cri.go:89] found id: ""
	I0814 17:40:29.080466   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.080474   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:29.080520   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:29.080584   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:29.112377   80228 cri.go:89] found id: ""
	I0814 17:40:29.112407   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.112418   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:29.112426   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:29.112493   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:29.145368   80228 cri.go:89] found id: ""
	I0814 17:40:29.145395   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.145403   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:29.145409   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:29.145471   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:29.177305   80228 cri.go:89] found id: ""
	I0814 17:40:29.177333   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.177341   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:29.177350   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:29.177366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:29.232156   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:29.232197   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:29.245286   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:29.245317   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:29.322257   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:29.322286   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:29.322302   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:29.397679   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:29.397714   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:27.512862   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.514756   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.945087   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.444743   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.760242   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.259825   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.935264   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:31.948380   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:31.948446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:31.978898   80228 cri.go:89] found id: ""
	I0814 17:40:31.978925   80228 logs.go:276] 0 containers: []
	W0814 17:40:31.978932   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:31.978939   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:31.978989   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:32.010652   80228 cri.go:89] found id: ""
	I0814 17:40:32.010681   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.010692   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:32.010699   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:32.010767   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:32.044821   80228 cri.go:89] found id: ""
	I0814 17:40:32.044852   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.044860   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:32.044866   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:32.044915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:32.076359   80228 cri.go:89] found id: ""
	I0814 17:40:32.076388   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.076398   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:32.076406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:32.076469   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:32.107652   80228 cri.go:89] found id: ""
	I0814 17:40:32.107680   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.107692   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:32.107709   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:32.107770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:32.138445   80228 cri.go:89] found id: ""
	I0814 17:40:32.138473   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.138484   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:32.138492   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:32.138558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:32.173771   80228 cri.go:89] found id: ""
	I0814 17:40:32.173794   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.173802   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:32.173807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:32.173857   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:32.206387   80228 cri.go:89] found id: ""
	I0814 17:40:32.206418   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.206429   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:32.206441   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:32.206454   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.258114   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:32.258148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:32.271984   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:32.272009   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:32.335423   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:32.335447   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:32.335464   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:32.411155   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:32.411206   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:34.975280   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:34.988098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:34.988176   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:35.022020   80228 cri.go:89] found id: ""
	I0814 17:40:35.022047   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.022062   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:35.022071   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:35.022124   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:35.055528   80228 cri.go:89] found id: ""
	I0814 17:40:35.055568   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.055578   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:35.055586   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:35.055647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:35.088373   80228 cri.go:89] found id: ""
	I0814 17:40:35.088404   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.088415   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:35.088422   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:35.088489   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:35.123162   80228 cri.go:89] found id: ""
	I0814 17:40:35.123188   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.123198   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:35.123206   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:35.123268   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:35.160240   80228 cri.go:89] found id: ""
	I0814 17:40:35.160267   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.160277   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:35.160286   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:35.160348   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:35.196249   80228 cri.go:89] found id: ""
	I0814 17:40:35.196276   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.196285   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:35.196293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:35.196359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:35.232564   80228 cri.go:89] found id: ""
	I0814 17:40:35.232588   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.232598   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:35.232606   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:35.232671   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:35.267357   80228 cri.go:89] found id: ""
	I0814 17:40:35.267383   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.267392   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:35.267399   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:35.267412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:35.279779   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:35.279806   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:35.347748   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:35.347769   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:35.347782   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:35.427900   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:35.427932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:35.468925   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:35.468953   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.013942   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.513138   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.944749   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.444665   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.760292   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:38.020581   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:38.034985   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:38.035066   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:38.070206   80228 cri.go:89] found id: ""
	I0814 17:40:38.070231   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.070240   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:38.070246   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:38.070294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:38.103859   80228 cri.go:89] found id: ""
	I0814 17:40:38.103885   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.103893   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:38.103898   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:38.103947   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:38.138247   80228 cri.go:89] found id: ""
	I0814 17:40:38.138271   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.138278   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:38.138285   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:38.138345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:38.179475   80228 cri.go:89] found id: ""
	I0814 17:40:38.179511   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.179520   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:38.179526   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:38.179578   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:38.224892   80228 cri.go:89] found id: ""
	I0814 17:40:38.224922   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.224932   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:38.224940   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:38.224996   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:38.270456   80228 cri.go:89] found id: ""
	I0814 17:40:38.270485   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.270497   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:38.270504   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:38.270569   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:38.305267   80228 cri.go:89] found id: ""
	I0814 17:40:38.305300   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.305308   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:38.305315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:38.305387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:38.336942   80228 cri.go:89] found id: ""
	I0814 17:40:38.336978   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.336989   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:38.337000   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:38.337016   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:38.388618   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:38.388651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:38.403442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:38.403472   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:38.478225   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:38.478256   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:38.478273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:38.553400   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:38.553440   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:41.089947   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:41.101989   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:41.102070   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:41.133743   80228 cri.go:89] found id: ""
	I0814 17:40:41.133767   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.133774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:41.133780   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:41.133828   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:41.169671   80228 cri.go:89] found id: ""
	I0814 17:40:41.169706   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.169714   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:41.169721   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:41.169773   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:41.203425   80228 cri.go:89] found id: ""
	I0814 17:40:41.203451   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.203459   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:41.203475   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:41.203534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:41.237031   80228 cri.go:89] found id: ""
	I0814 17:40:41.237064   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.237075   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:41.237084   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:41.237149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:41.271095   80228 cri.go:89] found id: ""
	I0814 17:40:41.271120   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.271128   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:41.271134   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:41.271190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:41.303640   80228 cri.go:89] found id: ""
	I0814 17:40:41.303672   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.303684   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:41.303692   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:41.303755   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:37.013555   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.013733   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.013910   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.943472   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.944582   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.261795   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.759672   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.336010   80228 cri.go:89] found id: ""
	I0814 17:40:41.336047   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.336062   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:41.336071   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:41.336140   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:41.370098   80228 cri.go:89] found id: ""
	I0814 17:40:41.370133   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.370143   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:41.370154   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:41.370168   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:41.420760   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:41.420794   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:41.433651   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:41.433678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:41.506623   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:41.506644   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:41.506657   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:41.591390   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:41.591426   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:44.130649   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:44.144362   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:44.144428   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:44.178485   80228 cri.go:89] found id: ""
	I0814 17:40:44.178516   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.178527   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:44.178535   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:44.178600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:44.214231   80228 cri.go:89] found id: ""
	I0814 17:40:44.214260   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.214268   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:44.214274   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:44.214336   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:44.248483   80228 cri.go:89] found id: ""
	I0814 17:40:44.248513   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.248524   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:44.248531   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:44.248600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:44.282445   80228 cri.go:89] found id: ""
	I0814 17:40:44.282472   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.282481   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:44.282493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:44.282560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:44.315141   80228 cri.go:89] found id: ""
	I0814 17:40:44.315169   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.315190   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:44.315198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:44.315259   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:44.346756   80228 cri.go:89] found id: ""
	I0814 17:40:44.346781   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.346789   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:44.346795   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:44.346853   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:44.378143   80228 cri.go:89] found id: ""
	I0814 17:40:44.378172   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.378183   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:44.378191   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:44.378255   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:44.411526   80228 cri.go:89] found id: ""
	I0814 17:40:44.411557   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.411567   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:44.411578   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:44.411592   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:44.459873   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:44.459913   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:44.473112   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:44.473148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:44.547514   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:44.547546   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:44.547579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:44.630377   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:44.630415   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:43.512113   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.512590   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.945080   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.946506   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.760626   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:48.260015   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.260186   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.173094   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:47.185854   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:47.185927   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:47.228755   80228 cri.go:89] found id: ""
	I0814 17:40:47.228781   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.228788   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:47.228795   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:47.228851   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:47.264986   80228 cri.go:89] found id: ""
	I0814 17:40:47.265020   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.265031   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:47.265037   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:47.265100   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:47.296900   80228 cri.go:89] found id: ""
	I0814 17:40:47.296929   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.296940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:47.296947   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:47.297009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:47.328120   80228 cri.go:89] found id: ""
	I0814 17:40:47.328147   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.328155   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:47.328161   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:47.328210   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:47.364147   80228 cri.go:89] found id: ""
	I0814 17:40:47.364171   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.364178   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:47.364184   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:47.364238   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:47.400466   80228 cri.go:89] found id: ""
	I0814 17:40:47.400493   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.400501   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:47.400507   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:47.400562   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:47.432681   80228 cri.go:89] found id: ""
	I0814 17:40:47.432713   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.432724   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:47.432732   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:47.432801   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:47.465466   80228 cri.go:89] found id: ""
	I0814 17:40:47.465498   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.465510   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:47.465522   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:47.465536   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:47.502076   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:47.502114   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:47.554451   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:47.554488   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:47.567658   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:47.567690   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:47.635805   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:47.635829   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:47.635844   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:50.215353   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:50.227723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:50.227795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:50.258250   80228 cri.go:89] found id: ""
	I0814 17:40:50.258276   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.258287   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:50.258296   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:50.258363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:50.291371   80228 cri.go:89] found id: ""
	I0814 17:40:50.291406   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.291416   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:50.291423   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:50.291479   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:50.321449   80228 cri.go:89] found id: ""
	I0814 17:40:50.321473   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.321481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:50.321486   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:50.321545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:50.351752   80228 cri.go:89] found id: ""
	I0814 17:40:50.351780   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.351791   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:50.351799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:50.351856   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:50.382022   80228 cri.go:89] found id: ""
	I0814 17:40:50.382050   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.382057   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:50.382063   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:50.382118   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:50.414057   80228 cri.go:89] found id: ""
	I0814 17:40:50.414083   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.414091   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:50.414098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:50.414156   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:50.447508   80228 cri.go:89] found id: ""
	I0814 17:40:50.447530   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.447537   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:50.447543   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:50.447606   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:50.487401   80228 cri.go:89] found id: ""
	I0814 17:40:50.487425   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.487434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:50.487442   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:50.487455   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:50.524404   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:50.524439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:50.578220   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:50.578256   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:50.591405   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:50.591431   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:50.657727   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:50.657750   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:50.657762   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:47.514490   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.012588   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.445363   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.944903   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.760728   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.760918   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:53.237985   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:53.250502   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:53.250572   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:53.285728   80228 cri.go:89] found id: ""
	I0814 17:40:53.285763   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.285774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:53.285784   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:53.285848   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:53.318195   80228 cri.go:89] found id: ""
	I0814 17:40:53.318231   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.318243   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:53.318252   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:53.318317   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:53.350259   80228 cri.go:89] found id: ""
	I0814 17:40:53.350291   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.350302   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:53.350310   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:53.350385   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:53.385894   80228 cri.go:89] found id: ""
	I0814 17:40:53.385920   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.385928   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:53.385934   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:53.385983   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:53.420851   80228 cri.go:89] found id: ""
	I0814 17:40:53.420878   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.420890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:53.420897   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:53.420963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:53.458332   80228 cri.go:89] found id: ""
	I0814 17:40:53.458370   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.458381   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:53.458392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:53.458465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:53.489719   80228 cri.go:89] found id: ""
	I0814 17:40:53.489750   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.489759   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:53.489765   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:53.489820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:53.522942   80228 cri.go:89] found id: ""
	I0814 17:40:53.522977   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.522988   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:53.522998   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:53.523013   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:53.599450   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:53.599492   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:53.637225   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:53.637254   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:53.688605   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:53.688647   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:53.704601   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:53.704633   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:53.775046   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.275201   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:56.288406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:56.288463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:52.013747   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.513735   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.514335   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:55.445462   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.447142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.946025   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.261047   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.760136   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.322862   80228 cri.go:89] found id: ""
	I0814 17:40:56.322891   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.322899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:56.322905   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:56.322954   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:56.356214   80228 cri.go:89] found id: ""
	I0814 17:40:56.356243   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.356262   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:56.356268   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:56.356338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:56.388877   80228 cri.go:89] found id: ""
	I0814 17:40:56.388900   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.388909   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:56.388915   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:56.388967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:56.422552   80228 cri.go:89] found id: ""
	I0814 17:40:56.422577   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.422585   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:56.422590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:56.422649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:56.456995   80228 cri.go:89] found id: ""
	I0814 17:40:56.457018   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.457026   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:56.457031   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:56.457079   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:56.495745   80228 cri.go:89] found id: ""
	I0814 17:40:56.495772   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.495788   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:56.495797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:56.495868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:56.529139   80228 cri.go:89] found id: ""
	I0814 17:40:56.529171   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.529179   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:56.529185   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:56.529237   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:56.561377   80228 cri.go:89] found id: ""
	I0814 17:40:56.561406   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.561414   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:56.561424   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:56.561439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:56.601504   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:56.601537   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:56.653369   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:56.653403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:56.666117   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:56.666144   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:56.731921   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.731949   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:56.731963   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.315712   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:59.328425   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:59.328486   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:59.364056   80228 cri.go:89] found id: ""
	I0814 17:40:59.364080   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.364088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:59.364094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:59.364151   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:59.398948   80228 cri.go:89] found id: ""
	I0814 17:40:59.398971   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.398978   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:59.398984   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:59.399029   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:59.430301   80228 cri.go:89] found id: ""
	I0814 17:40:59.430327   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.430335   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:59.430341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:59.430406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:59.465278   80228 cri.go:89] found id: ""
	I0814 17:40:59.465301   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.465309   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:59.465315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:59.465372   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:59.497544   80228 cri.go:89] found id: ""
	I0814 17:40:59.497575   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.497586   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:59.497595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:59.497659   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:59.529463   80228 cri.go:89] found id: ""
	I0814 17:40:59.529494   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.529506   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:59.529513   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:59.529587   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:59.562448   80228 cri.go:89] found id: ""
	I0814 17:40:59.562477   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.562487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:59.562495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:59.562609   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:59.594059   80228 cri.go:89] found id: ""
	I0814 17:40:59.594089   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.594103   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:59.594112   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:59.594123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.672139   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:59.672172   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:59.710714   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:59.710743   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:59.762645   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:59.762676   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:59.776006   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:59.776033   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:59.838187   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:59.013030   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:01.013280   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.445595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.944484   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.260244   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.760862   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.338964   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:02.351381   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:02.351460   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:02.383206   80228 cri.go:89] found id: ""
	I0814 17:41:02.383235   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.383244   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:02.383250   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:02.383310   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:02.417016   80228 cri.go:89] found id: ""
	I0814 17:41:02.417042   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.417049   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:02.417055   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:02.417111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:02.451936   80228 cri.go:89] found id: ""
	I0814 17:41:02.451964   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.451974   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:02.451982   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:02.452042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:02.489896   80228 cri.go:89] found id: ""
	I0814 17:41:02.489927   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.489937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:02.489945   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:02.490011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:02.524273   80228 cri.go:89] found id: ""
	I0814 17:41:02.524308   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.524339   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:02.524346   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:02.524409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:02.558813   80228 cri.go:89] found id: ""
	I0814 17:41:02.558842   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.558850   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:02.558861   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:02.558917   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:02.592704   80228 cri.go:89] found id: ""
	I0814 17:41:02.592733   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.592747   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:02.592753   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:02.592818   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:02.625250   80228 cri.go:89] found id: ""
	I0814 17:41:02.625277   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.625288   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:02.625299   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:02.625312   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:02.677577   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:02.677613   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:02.691407   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:02.691439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:02.756797   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:02.756869   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:02.756888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:02.830803   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:02.830842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.370085   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:05.385272   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:05.385342   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:05.421775   80228 cri.go:89] found id: ""
	I0814 17:41:05.421799   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.421806   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:05.421812   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:05.421860   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:05.457054   80228 cri.go:89] found id: ""
	I0814 17:41:05.457083   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.457093   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:05.457100   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:05.457153   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:05.489290   80228 cri.go:89] found id: ""
	I0814 17:41:05.489330   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.489338   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:05.489345   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:05.489392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:05.527066   80228 cri.go:89] found id: ""
	I0814 17:41:05.527091   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.527098   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:05.527105   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:05.527155   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:05.563882   80228 cri.go:89] found id: ""
	I0814 17:41:05.563915   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.563925   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:05.563931   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:05.563982   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:05.601837   80228 cri.go:89] found id: ""
	I0814 17:41:05.601863   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.601871   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:05.601879   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:05.601940   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:05.633503   80228 cri.go:89] found id: ""
	I0814 17:41:05.633531   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.633539   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:05.633545   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:05.633615   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:05.668281   80228 cri.go:89] found id: ""
	I0814 17:41:05.668312   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.668324   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:05.668335   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:05.668349   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:05.747214   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:05.747249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.784408   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:05.784441   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:05.835067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:05.835103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:05.847938   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:05.847966   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:05.917404   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:03.513033   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:05.514476   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:06.944595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.944850   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:07.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:09.762513   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.417559   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:08.431092   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:08.431165   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:08.465357   80228 cri.go:89] found id: ""
	I0814 17:41:08.465515   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.465543   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:08.465560   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:08.465675   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:08.499085   80228 cri.go:89] found id: ""
	I0814 17:41:08.499114   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.499123   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:08.499129   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:08.499180   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:08.533881   80228 cri.go:89] found id: ""
	I0814 17:41:08.533909   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.533917   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:08.533922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:08.533972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:08.570503   80228 cri.go:89] found id: ""
	I0814 17:41:08.570549   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.570560   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:08.570572   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:08.570649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:08.602557   80228 cri.go:89] found id: ""
	I0814 17:41:08.602599   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.602610   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:08.602691   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:08.602785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:08.636174   80228 cri.go:89] found id: ""
	I0814 17:41:08.636199   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.636206   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:08.636213   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:08.636261   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:08.672774   80228 cri.go:89] found id: ""
	I0814 17:41:08.672804   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.672815   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:08.672823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:08.672890   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:08.705535   80228 cri.go:89] found id: ""
	I0814 17:41:08.705590   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.705605   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:08.705622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:08.705641   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:08.744315   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:08.744341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:08.794632   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:08.794666   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:08.808089   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:08.808117   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:08.876417   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:08.876436   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:08.876452   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:08.013688   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:10.512639   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.444206   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:13.944056   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:12.260065   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:14.759640   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.458562   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:11.470905   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:11.470965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:11.505992   80228 cri.go:89] found id: ""
	I0814 17:41:11.506023   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.506036   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:11.506044   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:11.506112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:11.540893   80228 cri.go:89] found id: ""
	I0814 17:41:11.540922   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.540932   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:11.540945   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:11.541001   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:11.575423   80228 cri.go:89] found id: ""
	I0814 17:41:11.575448   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.575455   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:11.575462   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:11.575520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:11.608126   80228 cri.go:89] found id: ""
	I0814 17:41:11.608155   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.608164   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:11.608171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:11.608222   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:11.640165   80228 cri.go:89] found id: ""
	I0814 17:41:11.640190   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.640198   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:11.640204   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:11.640263   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:11.674425   80228 cri.go:89] found id: ""
	I0814 17:41:11.674446   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.674455   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:11.674460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:11.674513   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:11.707448   80228 cri.go:89] found id: ""
	I0814 17:41:11.707477   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.707487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:11.707493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:11.707555   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:11.744309   80228 cri.go:89] found id: ""
	I0814 17:41:11.744338   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.744346   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:11.744363   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:11.744375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:11.824165   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:11.824196   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:11.862013   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:11.862039   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:11.913862   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:11.913902   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:11.927147   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:11.927178   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:11.998403   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.498590   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:14.512847   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:14.512938   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:14.549255   80228 cri.go:89] found id: ""
	I0814 17:41:14.549288   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.549306   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:14.549316   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:14.549382   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:14.588917   80228 cri.go:89] found id: ""
	I0814 17:41:14.588948   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.588956   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:14.588963   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:14.589012   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:14.622581   80228 cri.go:89] found id: ""
	I0814 17:41:14.622611   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.622621   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:14.622628   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:14.622693   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:14.656029   80228 cri.go:89] found id: ""
	I0814 17:41:14.656056   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.656064   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:14.656070   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:14.656117   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:14.687502   80228 cri.go:89] found id: ""
	I0814 17:41:14.687527   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.687536   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:14.687541   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:14.687614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:14.720682   80228 cri.go:89] found id: ""
	I0814 17:41:14.720713   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.720721   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:14.720728   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:14.720778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:14.752482   80228 cri.go:89] found id: ""
	I0814 17:41:14.752511   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.752520   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:14.752525   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:14.752577   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:14.792980   80228 cri.go:89] found id: ""
	I0814 17:41:14.793004   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.793014   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:14.793026   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:14.793042   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:14.845259   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:14.845297   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:14.858530   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:14.858556   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:14.931025   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.931054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:14.931067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:15.008081   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:15.008115   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:13.014174   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:15.512768   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444802   79521 pod_ready.go:81] duration metric: took 4m0.006448573s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:16.444810   79521 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 17:41:16.444817   79521 pod_ready.go:38] duration metric: took 4m5.044051569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:16.444832   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:41:16.444858   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:16.444901   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:16.499710   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.499742   79521 cri.go:89] found id: ""
	I0814 17:41:16.499751   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:16.499815   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.504467   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:16.504544   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:16.546815   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.546842   79521 cri.go:89] found id: ""
	I0814 17:41:16.546851   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:16.546905   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.550917   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:16.550986   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:16.590195   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:16.590216   79521 cri.go:89] found id: ""
	I0814 17:41:16.590224   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:16.590267   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.594123   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:16.594196   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:16.631058   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:16.631091   79521 cri.go:89] found id: ""
	I0814 17:41:16.631101   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:16.631163   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.635151   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:16.635226   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:16.671555   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.671582   79521 cri.go:89] found id: ""
	I0814 17:41:16.671592   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:16.671657   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.675790   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:16.675847   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:16.713131   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:16.713157   79521 cri.go:89] found id: ""
	I0814 17:41:16.713165   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:16.713217   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.717296   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:16.717354   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:16.756212   79521 cri.go:89] found id: ""
	I0814 17:41:16.756245   79521 logs.go:276] 0 containers: []
	W0814 17:41:16.756255   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:16.756261   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:16.756324   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:16.802379   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.802411   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.802417   79521 cri.go:89] found id: ""
	I0814 17:41:16.802431   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:16.802492   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.807105   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.811210   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:16.811241   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.852490   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:16.852526   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.894384   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:16.894425   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.929919   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:16.929949   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.965031   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:16.965061   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:17.468878   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.468945   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.482799   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.482826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:17.610874   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:17.610904   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:17.649292   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:17.649322   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:17.691014   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:17.691045   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:17.749218   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.749254   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.794240   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.794280   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.868805   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:17.868851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:18.760369   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:17.544873   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:17.557699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:17.557791   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:17.600314   80228 cri.go:89] found id: ""
	I0814 17:41:17.600347   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.600360   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:17.600370   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:17.600441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:17.634873   80228 cri.go:89] found id: ""
	I0814 17:41:17.634902   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.634914   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:17.634923   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:17.634986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:17.670521   80228 cri.go:89] found id: ""
	I0814 17:41:17.670552   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.670563   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:17.670571   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:17.670647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:17.705587   80228 cri.go:89] found id: ""
	I0814 17:41:17.705612   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.705626   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:17.705632   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:17.705682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:17.768178   80228 cri.go:89] found id: ""
	I0814 17:41:17.768207   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.768218   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:17.768226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:17.768290   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:17.804692   80228 cri.go:89] found id: ""
	I0814 17:41:17.804721   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.804729   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:17.804735   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:17.804795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:17.847994   80228 cri.go:89] found id: ""
	I0814 17:41:17.848030   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.848041   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:17.848052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:17.848122   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:17.883905   80228 cri.go:89] found id: ""
	I0814 17:41:17.883935   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.883944   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:17.883953   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.883965   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.931481   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.931522   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.983315   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.983363   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.996941   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.996981   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:18.067254   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:18.067279   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:18.067295   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:20.642099   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.655941   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.656014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.692525   80228 cri.go:89] found id: ""
	I0814 17:41:20.692554   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.692565   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:20.692577   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.692634   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.727721   80228 cri.go:89] found id: ""
	I0814 17:41:20.727755   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.727769   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:20.727778   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.727845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.770441   80228 cri.go:89] found id: ""
	I0814 17:41:20.770471   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.770481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:20.770488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.770550   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.807932   80228 cri.go:89] found id: ""
	I0814 17:41:20.807961   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.807968   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:20.807975   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.808030   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.849919   80228 cri.go:89] found id: ""
	I0814 17:41:20.849944   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.849963   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:20.849970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.850045   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.887351   80228 cri.go:89] found id: ""
	I0814 17:41:20.887382   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.887393   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:20.887401   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.887465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.921284   80228 cri.go:89] found id: ""
	I0814 17:41:20.921310   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.921321   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.921328   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:20.921409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:20.955238   80228 cri.go:89] found id: ""
	I0814 17:41:20.955267   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.955278   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:20.955288   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.955314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:21.024544   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:21.024565   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.024579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:21.103987   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.104019   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.145515   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.145550   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.197307   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.197346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.514682   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.015152   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.429364   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.445075   79521 api_server.go:72] duration metric: took 4m16.759338748s to wait for apiserver process to appear ...
	I0814 17:41:20.445102   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:41:20.445133   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.445179   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.477630   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.477655   79521 cri.go:89] found id: ""
	I0814 17:41:20.477663   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:20.477714   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.481667   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.481728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.514443   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.514465   79521 cri.go:89] found id: ""
	I0814 17:41:20.514473   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:20.514516   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.518344   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.518401   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.559625   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:20.559647   79521 cri.go:89] found id: ""
	I0814 17:41:20.559653   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:20.559706   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.564137   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.564203   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.603504   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:20.603531   79521 cri.go:89] found id: ""
	I0814 17:41:20.603540   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:20.603602   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.608260   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.608334   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.641466   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:20.641487   79521 cri.go:89] found id: ""
	I0814 17:41:20.641494   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:20.641538   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.645566   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.645625   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.685003   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:20.685032   79521 cri.go:89] found id: ""
	I0814 17:41:20.685042   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:20.685104   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.690347   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.690429   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.733753   79521 cri.go:89] found id: ""
	I0814 17:41:20.733782   79521 logs.go:276] 0 containers: []
	W0814 17:41:20.733793   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.733800   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:20.733862   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:20.781659   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:20.781683   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:20.781689   79521 cri.go:89] found id: ""
	I0814 17:41:20.781697   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:20.781753   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.786293   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.790358   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.790377   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:20.916473   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:20.916513   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.968706   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:20.968743   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:21.003507   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:21.003546   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:21.049909   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:21.049961   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:21.090052   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:21.090080   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:21.129551   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.129585   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.174792   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.174828   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.247392   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.247440   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:21.261095   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:21.261129   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:21.306583   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:21.306616   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:21.339602   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:21.339642   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:21.397695   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.397732   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.301807   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:41:24.306392   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:41:24.307364   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:41:24.307390   79521 api_server.go:131] duration metric: took 3.862280551s to wait for apiserver health ...
	I0814 17:41:24.307398   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:41:24.307418   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:24.307463   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:24.342519   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:24.342552   79521 cri.go:89] found id: ""
	I0814 17:41:24.342561   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:24.342627   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.346361   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:24.346422   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:24.386973   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:24.387001   79521 cri.go:89] found id: ""
	I0814 17:41:24.387012   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:24.387066   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.390942   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:24.390999   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:24.426841   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:24.426863   79521 cri.go:89] found id: ""
	I0814 17:41:24.426872   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:24.426927   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.430856   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:24.430917   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:24.467024   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.467050   79521 cri.go:89] found id: ""
	I0814 17:41:24.467059   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:24.467117   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.471659   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:24.471728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:24.506759   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:24.506786   79521 cri.go:89] found id: ""
	I0814 17:41:24.506799   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:24.506857   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.511660   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:24.511728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:24.547768   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:24.547795   79521 cri.go:89] found id: ""
	I0814 17:41:24.547805   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:24.547862   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.552881   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:24.552941   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:24.588519   79521 cri.go:89] found id: ""
	I0814 17:41:24.588544   79521 logs.go:276] 0 containers: []
	W0814 17:41:24.588551   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:24.588557   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:24.588602   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:24.624604   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.624626   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:24.624630   79521 cri.go:89] found id: ""
	I0814 17:41:24.624636   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:24.624691   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.628703   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.632611   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:24.632636   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.671903   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:24.671935   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.709821   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.709851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:25.107477   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:25.107515   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:25.221012   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:25.221041   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.760924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.259780   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.260347   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.712584   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:23.726467   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:23.726545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:23.762871   80228 cri.go:89] found id: ""
	I0814 17:41:23.762906   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.762916   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:23.762922   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:23.762972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:23.800068   80228 cri.go:89] found id: ""
	I0814 17:41:23.800096   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.800105   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:23.800113   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:23.800173   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:23.834913   80228 cri.go:89] found id: ""
	I0814 17:41:23.834945   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.834956   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:23.834963   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:23.835022   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:23.871196   80228 cri.go:89] found id: ""
	I0814 17:41:23.871222   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.871233   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:23.871240   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:23.871294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:23.907830   80228 cri.go:89] found id: ""
	I0814 17:41:23.907854   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.907862   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:23.907868   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:23.907926   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:23.941110   80228 cri.go:89] found id: ""
	I0814 17:41:23.941133   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.941141   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:23.941146   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:23.941197   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:23.973602   80228 cri.go:89] found id: ""
	I0814 17:41:23.973631   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.973649   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:23.973655   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:23.973710   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:24.007398   80228 cri.go:89] found id: ""
	I0814 17:41:24.007436   80228 logs.go:276] 0 containers: []
	W0814 17:41:24.007450   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:24.007462   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:24.007478   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:24.061830   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:24.061867   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:24.075012   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:24.075046   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:24.148666   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:24.148692   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.148703   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.230208   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:24.230248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:22.513616   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.013383   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.272397   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:25.272429   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:25.317574   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:25.317603   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:25.352239   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:25.352271   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:25.409997   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:25.410030   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:25.443875   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:25.443899   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:25.490987   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:25.491023   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:25.563495   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:25.563531   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:25.577305   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:25.577345   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:28.147823   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:41:28.147855   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.147860   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.147864   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.147867   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.147870   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.147874   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.147879   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.147883   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.147890   79521 system_pods.go:74] duration metric: took 3.840486938s to wait for pod list to return data ...
	I0814 17:41:28.147898   79521 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:41:28.150377   79521 default_sa.go:45] found service account: "default"
	I0814 17:41:28.150398   79521 default_sa.go:55] duration metric: took 2.493777ms for default service account to be created ...
	I0814 17:41:28.150406   79521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:41:28.154470   79521 system_pods.go:86] 8 kube-system pods found
	I0814 17:41:28.154494   79521 system_pods.go:89] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.154500   79521 system_pods.go:89] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.154504   79521 system_pods.go:89] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.154510   79521 system_pods.go:89] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.154514   79521 system_pods.go:89] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.154519   79521 system_pods.go:89] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.154525   79521 system_pods.go:89] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.154530   79521 system_pods.go:89] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.154537   79521 system_pods.go:126] duration metric: took 4.125964ms to wait for k8s-apps to be running ...
	I0814 17:41:28.154544   79521 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:41:28.154585   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:28.170494   79521 system_svc.go:56] duration metric: took 15.940728ms WaitForService to wait for kubelet
	I0814 17:41:28.170524   79521 kubeadm.go:582] duration metric: took 4m24.484791018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:41:28.170545   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:41:28.173368   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:41:28.173395   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:41:28.173407   79521 node_conditions.go:105] duration metric: took 2.858344ms to run NodePressure ...
	I0814 17:41:28.173417   79521 start.go:241] waiting for startup goroutines ...
	I0814 17:41:28.173424   79521 start.go:246] waiting for cluster config update ...
	I0814 17:41:28.173435   79521 start.go:255] writing updated cluster config ...
	I0814 17:41:28.173730   79521 ssh_runner.go:195] Run: rm -f paused
	I0814 17:41:28.219460   79521 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:41:28.221461   79521 out.go:177] * Done! kubectl is now configured to use "embed-certs-309673" cluster and "default" namespace by default
	I0814 17:41:27.761580   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:30.260454   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:26.776204   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:26.789057   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:26.789132   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:26.822531   80228 cri.go:89] found id: ""
	I0814 17:41:26.822564   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.822575   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:26.822590   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:26.822651   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:26.855314   80228 cri.go:89] found id: ""
	I0814 17:41:26.855353   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.855365   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:26.855372   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:26.855434   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:26.889389   80228 cri.go:89] found id: ""
	I0814 17:41:26.889413   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.889421   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:26.889427   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:26.889485   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:26.925478   80228 cri.go:89] found id: ""
	I0814 17:41:26.925500   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.925508   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:26.925514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:26.925560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:26.957012   80228 cri.go:89] found id: ""
	I0814 17:41:26.957042   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.957053   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:26.957061   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:26.957114   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:26.989358   80228 cri.go:89] found id: ""
	I0814 17:41:26.989388   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.989399   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:26.989406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:26.989468   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:27.024761   80228 cri.go:89] found id: ""
	I0814 17:41:27.024786   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.024805   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:27.024830   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:27.024895   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:27.059172   80228 cri.go:89] found id: ""
	I0814 17:41:27.059204   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.059215   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:27.059226   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:27.059240   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.096123   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:27.096151   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:27.147689   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:27.147719   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:27.161454   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:27.161483   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:27.234644   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:27.234668   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:27.234680   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:29.817428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:29.831731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:29.831811   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:29.868531   80228 cri.go:89] found id: ""
	I0814 17:41:29.868567   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.868577   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:29.868585   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:29.868657   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:29.913578   80228 cri.go:89] found id: ""
	I0814 17:41:29.913602   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.913611   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:29.913617   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:29.913677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:29.963916   80228 cri.go:89] found id: ""
	I0814 17:41:29.963939   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.963946   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:29.963952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:29.964011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:30.016735   80228 cri.go:89] found id: ""
	I0814 17:41:30.016763   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.016773   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:30.016781   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:30.016841   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:30.048852   80228 cri.go:89] found id: ""
	I0814 17:41:30.048880   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.048890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:30.048898   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:30.048960   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:30.080291   80228 cri.go:89] found id: ""
	I0814 17:41:30.080324   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.080335   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:30.080343   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:30.080506   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:30.113876   80228 cri.go:89] found id: ""
	I0814 17:41:30.113904   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.113914   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:30.113921   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:30.113984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:30.147568   80228 cri.go:89] found id: ""
	I0814 17:41:30.147594   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.147604   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:30.147614   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:30.147627   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:30.197596   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:30.197630   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:30.210576   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:30.210602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:30.277711   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:30.277731   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:30.277746   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:30.356556   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:30.356590   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.013699   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:29.014020   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:31.512974   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:32.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:35.254066   79871 pod_ready.go:81] duration metric: took 4m0.000392709s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:35.254095   79871 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:41:35.254112   79871 pod_ready.go:38] duration metric: took 4m12.044429915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:35.254137   79871 kubeadm.go:597] duration metric: took 4m20.041916203s to restartPrimaryControlPlane
	W0814 17:41:35.254189   79871 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:35.254218   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:32.892697   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:32.909435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:32.909497   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:32.945055   80228 cri.go:89] found id: ""
	I0814 17:41:32.945080   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.945088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:32.945094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:32.945150   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:32.979266   80228 cri.go:89] found id: ""
	I0814 17:41:32.979294   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.979305   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:32.979312   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:32.979398   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:33.014260   80228 cri.go:89] found id: ""
	I0814 17:41:33.014286   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.014294   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:33.014299   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:33.014351   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:33.047590   80228 cri.go:89] found id: ""
	I0814 17:41:33.047622   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.047633   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:33.047646   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:33.047711   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:33.081258   80228 cri.go:89] found id: ""
	I0814 17:41:33.081294   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.081328   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:33.081337   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:33.081403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:33.112209   80228 cri.go:89] found id: ""
	I0814 17:41:33.112237   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.112247   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:33.112254   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:33.112318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:33.143854   80228 cri.go:89] found id: ""
	I0814 17:41:33.143892   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.143904   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:33.143913   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:33.143977   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:33.175147   80228 cri.go:89] found id: ""
	I0814 17:41:33.175190   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.175201   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:33.175212   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:33.175226   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:33.212877   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:33.212908   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:33.268067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:33.268103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:33.281357   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:33.281386   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:33.350233   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:33.350257   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:33.350269   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:35.929498   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:35.942290   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:35.942354   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:35.975782   80228 cri.go:89] found id: ""
	I0814 17:41:35.975809   80228 logs.go:276] 0 containers: []
	W0814 17:41:35.975818   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:35.975826   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:35.975886   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:36.008165   80228 cri.go:89] found id: ""
	I0814 17:41:36.008191   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.008200   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:36.008206   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:36.008262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:36.044912   80228 cri.go:89] found id: ""
	I0814 17:41:36.044937   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.044945   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:36.044954   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:36.045002   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:36.078068   80228 cri.go:89] found id: ""
	I0814 17:41:36.078096   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.078108   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:36.078116   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:36.078179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:36.110429   80228 cri.go:89] found id: ""
	I0814 17:41:36.110456   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.110467   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:36.110480   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:36.110540   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:36.142086   80228 cri.go:89] found id: ""
	I0814 17:41:36.142111   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.142119   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:36.142125   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:36.142186   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:36.172738   80228 cri.go:89] found id: ""
	I0814 17:41:36.172761   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.172769   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:36.172775   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:36.172831   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:36.204345   80228 cri.go:89] found id: ""
	I0814 17:41:36.204368   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.204376   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:36.204388   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:36.204403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:36.216667   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:36.216689   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:36.279509   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:36.279528   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:36.279540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:33.513591   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.013400   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.360411   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:36.360447   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:36.398193   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:36.398230   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:38.952415   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:38.968484   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:38.968554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:39.002450   80228 cri.go:89] found id: ""
	I0814 17:41:39.002479   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.002486   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:39.002493   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:39.002551   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:39.035840   80228 cri.go:89] found id: ""
	I0814 17:41:39.035868   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.035876   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:39.035882   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:39.035934   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:39.069900   80228 cri.go:89] found id: ""
	I0814 17:41:39.069929   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.069940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:39.069946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:39.069999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:39.104657   80228 cri.go:89] found id: ""
	I0814 17:41:39.104681   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.104689   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:39.104695   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:39.104751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:39.137279   80228 cri.go:89] found id: ""
	I0814 17:41:39.137312   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.137322   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:39.137330   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:39.137403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:39.170377   80228 cri.go:89] found id: ""
	I0814 17:41:39.170414   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.170424   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:39.170430   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:39.170491   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:39.205742   80228 cri.go:89] found id: ""
	I0814 17:41:39.205779   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.205790   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:39.205796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:39.205850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:39.239954   80228 cri.go:89] found id: ""
	I0814 17:41:39.239979   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.239987   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:39.239994   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:39.240011   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:39.276587   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:39.276619   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:39.329286   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:39.329322   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:39.342232   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:39.342257   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:39.411043   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:39.411063   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:39.411075   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:38.013562   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:40.013740   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:41.994479   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:42.007736   80228 kubeadm.go:597] duration metric: took 4m4.488869114s to restartPrimaryControlPlane
	W0814 17:41:42.007822   80228 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:42.007871   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:42.513259   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:45.013455   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:46.541593   80228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.533697889s)
	I0814 17:41:46.541676   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:46.556181   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:41:46.565943   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:41:46.575481   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:41:46.575501   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:41:46.575549   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:41:46.585143   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:41:46.585202   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:41:46.595157   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:41:46.604539   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:41:46.604600   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:41:46.613345   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.622186   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:41:46.622242   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.631221   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:41:46.640649   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:41:46.640706   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
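
For reference, the stale-config cleanup above is a simple check-and-remove pass over the four kubeconfig files: grep each one for the expected control-plane endpoint and delete it if the endpoint is absent. A minimal shell sketch of the same pattern, using the endpoint and paths from this run, would be:

# sketch only: mirrors the grep-then-rm pattern in the log above
endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
done
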
	I0814 17:41:46.650161   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:41:46.724104   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:41:46.724182   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:41:46.860463   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:41:46.860606   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:41:46.860725   80228 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:41:47.036697   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:41:47.038444   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:41:47.038561   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:41:47.038670   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:41:47.038775   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:41:47.038860   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:41:47.038973   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:41:47.039067   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:41:47.039172   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:41:47.039256   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:41:47.039359   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:41:47.039456   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:41:47.039516   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:41:47.039587   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:41:47.278696   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:41:47.664300   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:41:47.988137   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:41:48.076560   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:41:48.093447   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:41:48.094656   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:41:48.094793   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:41:48.253225   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:41:48.255034   80228 out.go:204]   - Booting up control plane ...
	I0814 17:41:48.255160   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:41:48.259041   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:41:48.260074   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:41:48.260862   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:41:48.262910   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:41:47.513415   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:50.012937   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:52.013499   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:54.514150   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:57.013146   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:59.013393   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.014185   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.441261   79871 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.187019598s)
	I0814 17:42:01.441333   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:01.457213   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:01.466802   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:01.475719   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:01.475736   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:01.475784   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:42:01.484555   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:01.484618   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:01.493956   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:42:01.503873   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:01.503923   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:01.514710   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.524473   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:01.524531   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.534749   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:42:01.544491   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:01.544558   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:01.555481   79871 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:01.599801   79871 kubeadm.go:310] W0814 17:42:01.575622    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.600615   79871 kubeadm.go:310] W0814 17:42:01.576625    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.703064   79871 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:03.513007   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:05.514241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:09.627141   79871 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:09.627216   79871 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:09.627344   79871 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:09.627480   79871 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:09.627638   79871 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:09.627717   79871 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:09.629272   79871 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:09.629370   79871 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:09.629472   79871 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:09.629592   79871 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:09.629712   79871 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:09.629780   79871 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:09.629826   79871 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:09.629898   79871 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:09.629963   79871 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:09.630076   79871 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:09.630198   79871 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:09.630253   79871 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:09.630314   79871 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:09.630357   79871 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:09.630412   79871 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:09.630457   79871 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:09.630509   79871 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:09.630560   79871 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:09.630629   79871 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:09.630688   79871 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:09.632664   79871 out.go:204]   - Booting up control plane ...
	I0814 17:42:09.632763   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:09.632878   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:09.632963   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:09.633100   79871 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:09.633207   79871 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:09.633252   79871 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:09.633412   79871 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:09.633542   79871 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:09.633624   79871 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004125702s
	I0814 17:42:09.633727   79871 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:09.633814   79871 kubeadm.go:310] [api-check] The API server is healthy after 4.501648596s
	I0814 17:42:09.633967   79871 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:09.634119   79871 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:09.634169   79871 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:09.634328   79871 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-885666 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:09.634400   79871 kubeadm.go:310] [bootstrap-token] Using token: 17ct2j.hazurgskaspe26qx
	I0814 17:42:09.635732   79871 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:09.635859   79871 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:09.635990   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:09.636141   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:09.636250   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:09.636347   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:09.636485   79871 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:09.636657   79871 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:09.636708   79871 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:09.636747   79871 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:09.636753   79871 kubeadm.go:310] 
	I0814 17:42:09.636813   79871 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:09.636835   79871 kubeadm.go:310] 
	I0814 17:42:09.636972   79871 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:09.636995   79871 kubeadm.go:310] 
	I0814 17:42:09.637029   79871 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:09.637120   79871 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:09.637185   79871 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:09.637195   79871 kubeadm.go:310] 
	I0814 17:42:09.637267   79871 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:09.637277   79871 kubeadm.go:310] 
	I0814 17:42:09.637315   79871 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:09.637321   79871 kubeadm.go:310] 
	I0814 17:42:09.637384   79871 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:09.637461   79871 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:09.637523   79871 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:09.637529   79871 kubeadm.go:310] 
	I0814 17:42:09.637623   79871 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:09.637691   79871 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:09.637703   79871 kubeadm.go:310] 
	I0814 17:42:09.637779   79871 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.637866   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:09.637890   79871 kubeadm.go:310] 	--control-plane 
	I0814 17:42:09.637899   79871 kubeadm.go:310] 
	I0814 17:42:09.638010   79871 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:09.638020   79871 kubeadm.go:310] 
	I0814 17:42:09.638098   79871 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.638211   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:09.638234   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:42:09.638246   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:09.639748   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:09.641031   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:09.652173   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
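
The 1-k8s.conflist written here is minikube's bridge CNI configuration. Its exact contents are not shown in the log; the following is only a representative sketch of such a file (the field values, including the pod subnet, are illustrative assumptions, not the 496-byte file this run generated):

# illustrative only: write a minimal bridge CNI config like the one minikube installs
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
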
	I0814 17:42:09.670482   79871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-885666 minikube.k8s.io/updated_at=2024_08_14T17_42_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=default-k8s-diff-port-885666 minikube.k8s.io/primary=true
	I0814 17:42:09.703097   79871 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:09.881340   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:10.381470   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:07.516539   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.015456   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.882013   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.382239   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.881638   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.381703   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.881401   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.381524   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.881402   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.381696   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.498441   79871 kubeadm.go:1113] duration metric: took 4.827929439s to wait for elevateKubeSystemPrivileges
	I0814 17:42:14.498474   79871 kubeadm.go:394] duration metric: took 4m59.336328921s to StartCluster
	I0814 17:42:14.498493   79871 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.498581   79871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:42:14.501029   79871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.501309   79871 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:42:14.501432   79871 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:42:14.501508   79871 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501541   79871 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501550   79871 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:42:14.501577   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501590   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:42:14.501619   79871 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501667   79871 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501677   79871 addons.go:243] addon metrics-server should already be in state true
	I0814 17:42:14.501716   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501593   79871 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501840   79871 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-885666"
	I0814 17:42:14.502106   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502142   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502160   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502174   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502176   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502199   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502371   79871 out.go:177] * Verifying Kubernetes components...
	I0814 17:42:14.504085   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:42:14.519401   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0814 17:42:14.519631   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0814 17:42:14.520085   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520196   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520701   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520722   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.520789   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0814 17:42:14.520978   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520994   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.521255   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.521519   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521524   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521703   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.522021   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.522051   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.522548   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.522572   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.522864   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.523507   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.523550   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.525737   79871 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.525759   79871 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:42:14.525789   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.526144   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.526170   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.538930   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0814 17:42:14.538995   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0814 17:42:14.539567   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.539594   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.540125   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540138   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540266   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540289   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540624   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540770   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540825   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.540970   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.542540   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.542848   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.544439   79871 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:42:14.544444   79871 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:42:14.544881   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0814 17:42:14.545315   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.545575   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:42:14.545586   79871 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:42:14.545601   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545672   79871 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.545691   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:42:14.545708   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545750   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.545759   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.546339   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.547167   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.547290   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.549794   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.549829   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550423   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550637   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550707   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551025   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551119   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551168   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551302   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.551658   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.567203   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I0814 17:42:14.567613   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.568141   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.568167   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.568484   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.568678   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.570337   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.570867   79871 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.570888   79871 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:42:14.570906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.574091   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574562   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.574586   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574667   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.574857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.575039   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.575197   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.675594   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:42:14.694520   79871 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702650   79871 node_ready.go:49] node "default-k8s-diff-port-885666" has status "Ready":"True"
	I0814 17:42:14.702672   79871 node_ready.go:38] duration metric: took 8.119351ms for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702684   79871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:14.707535   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
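
The readiness gates minikube is waiting on here can be reproduced by hand. Assuming a kubeconfig that points at this cluster (an assumption, since the test uses its own config), the checks are roughly:

# roughly the same node and CoreDNS readiness checks, done manually
kubectl wait --for=condition=Ready node/default-k8s-diff-port-885666 --timeout=6m
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
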
	I0814 17:42:14.762686   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.805275   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.837118   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:42:14.837143   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:42:14.881848   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:42:14.881872   79871 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:42:14.902731   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.902762   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903058   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903076   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.903092   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.903111   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903448   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.903484   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903493   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.908980   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.908995   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.909239   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.909310   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.909336   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.920224   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:14.920249   79871 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:42:14.955256   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
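
After the metrics-server manifests are applied, a quick way to confirm the addon actually came up is to check its Deployment and the APIService it registers. Using the same binary path and kubeconfig as this run (run on the node), that would be something like:

# hedged follow-up checks for the metrics-server addon
sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get deploy metrics-server
sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get apiservice v1beta1.metrics.k8s.io
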
	I0814 17:42:15.297167   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297190   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297544   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.297602   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297631   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.297649   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297659   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297865   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297885   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842348   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842376   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.842688   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.842707   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842716   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842738   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.842805   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.843057   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.843070   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.843081   79871 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-885666"
	I0814 17:42:15.844747   79871 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 17:42:12.513055   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:14.514298   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:15.845895   79871 addons.go:510] duration metric: took 1.344461878s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 17:42:16.714014   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:18.715243   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:17.013231   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:19.013966   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:20.507978   79367 pod_ready.go:81] duration metric: took 4m0.001138158s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	E0814 17:42:20.508026   79367 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:42:20.508048   79367 pod_ready.go:38] duration metric: took 4m6.305785273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:20.508081   79367 kubeadm.go:597] duration metric: took 4m13.455842043s to restartPrimaryControlPlane
	W0814 17:42:20.508145   79367 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:42:20.508186   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
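
When a system pod times out like metrics-server-6867b74b74-8c2cx does here, the usual triage (outside the test harness, with a kubeconfig for the affected cluster, which is an assumption) is to look at the pod's conditions and recent events, for example:

# inspect why the pod never reached Ready
kubectl -n kube-system describe pod metrics-server-6867b74b74-8c2cx
kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20
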
	I0814 17:42:20.714660   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.714687   79871 pod_ready.go:81] duration metric: took 6.007129076s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.714696   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719517   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.719542   79871 pod_ready.go:81] duration metric: took 4.838754ms for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719554   79871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724787   79871 pod_ready.go:92] pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.724816   79871 pod_ready.go:81] duration metric: took 5.250194ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724834   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731431   79871 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.731456   79871 pod_ready.go:81] duration metric: took 1.00661383s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731468   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736442   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.736467   79871 pod_ready.go:81] duration metric: took 4.989787ms for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736480   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911495   79871 pod_ready.go:92] pod "kube-proxy-254cb" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.911520   79871 pod_ready.go:81] duration metric: took 175.03218ms for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911529   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311700   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:22.311730   79871 pod_ready.go:81] duration metric: took 400.194781ms for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311739   79871 pod_ready.go:38] duration metric: took 7.609043377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:22.311752   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:42:22.311800   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:42:22.326995   79871 api_server.go:72] duration metric: took 7.825649112s to wait for apiserver process to appear ...
	I0814 17:42:22.327018   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:42:22.327036   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:42:22.331069   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:42:22.332077   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:42:22.332096   79871 api_server.go:131] duration metric: took 5.0724ms to wait for apiserver health ...
	I0814 17:42:22.332103   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:42:22.514565   79871 system_pods.go:59] 9 kube-system pods found
	I0814 17:42:22.514595   79871 system_pods.go:61] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.514601   79871 system_pods.go:61] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.514606   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.514610   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.514614   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.514617   79871 system_pods.go:61] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.514620   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.514626   79871 system_pods.go:61] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.514631   79871 system_pods.go:61] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.514639   79871 system_pods.go:74] duration metric: took 182.531543ms to wait for pod list to return data ...
	I0814 17:42:22.514647   79871 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:42:22.713101   79871 default_sa.go:45] found service account: "default"
	I0814 17:42:22.713125   79871 default_sa.go:55] duration metric: took 198.471814ms for default service account to be created ...
	I0814 17:42:22.713136   79871 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:42:22.914576   79871 system_pods.go:86] 9 kube-system pods found
	I0814 17:42:22.914619   79871 system_pods.go:89] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.914628   79871 system_pods.go:89] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.914635   79871 system_pods.go:89] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.914643   79871 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.914650   79871 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.914657   79871 system_pods.go:89] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.914665   79871 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.914678   79871 system_pods.go:89] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.914689   79871 system_pods.go:89] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.914705   79871 system_pods.go:126] duration metric: took 201.563199ms to wait for k8s-apps to be running ...
	I0814 17:42:22.914716   79871 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:42:22.914768   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:22.928499   79871 system_svc.go:56] duration metric: took 13.774119ms WaitForService to wait for kubelet
	I0814 17:42:22.928525   79871 kubeadm.go:582] duration metric: took 8.427183796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:42:22.928543   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:42:23.112363   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:42:23.112398   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:42:23.112410   79871 node_conditions.go:105] duration metric: took 183.861382ms to run NodePressure ...
	I0814 17:42:23.112423   79871 start.go:241] waiting for startup goroutines ...
	I0814 17:42:23.112432   79871 start.go:246] waiting for cluster config update ...
	I0814 17:42:23.112446   79871 start.go:255] writing updated cluster config ...
	I0814 17:42:23.112792   79871 ssh_runner.go:195] Run: rm -f paused
	I0814 17:42:23.162700   79871 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:42:23.164689   79871 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-885666" cluster and "default" namespace by default
	I0814 17:42:28.263217   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:42:28.263629   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:28.263853   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:33.264169   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:33.264403   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:43.264648   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:43.264858   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:46.859569   79367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.351355314s)
	I0814 17:42:46.859653   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:46.875530   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:46.884772   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:46.894185   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:46.894208   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:46.894258   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:42:46.903690   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:46.903748   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:46.913277   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:42:46.922120   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:46.922173   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:46.931143   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.939936   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:46.939997   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.949257   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:42:46.958109   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:46.958169   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:46.967609   79367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:47.010119   79367 kubeadm.go:310] W0814 17:42:46.983769    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.010889   79367 kubeadm.go:310] W0814 17:42:46.984558    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.122746   79367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:55.571963   79367 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:55.572017   79367 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:55.572127   79367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:55.572236   79367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:55.572314   79367 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:55.572385   79367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:55.574178   79367 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:55.574288   79367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:55.574372   79367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:55.574485   79367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:55.574573   79367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:55.574669   79367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:55.574740   79367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:55.574811   79367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:55.574909   79367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:55.575014   79367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:55.575135   79367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:55.575187   79367 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:55.575238   79367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:55.575288   79367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:55.575359   79367 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:55.575438   79367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:55.575521   79367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:55.575608   79367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:55.575759   79367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:55.575869   79367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:55.577331   79367 out.go:204]   - Booting up control plane ...
	I0814 17:42:55.577429   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:55.577511   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:55.577587   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:55.577773   79367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:55.577908   79367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:55.577968   79367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:55.578152   79367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:55.578301   79367 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:55.578368   79367 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.938552ms
	I0814 17:42:55.578428   79367 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:55.578480   79367 kubeadm.go:310] [api-check] The API server is healthy after 5.00239154s
	I0814 17:42:55.578605   79367 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:55.578777   79367 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:55.578863   79367 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:55.579025   79367 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-545149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:55.579100   79367 kubeadm.go:310] [bootstrap-token] Using token: qzd0yh.k8a8j7f6vmqndeav
	I0814 17:42:55.580318   79367 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:55.580429   79367 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:55.580503   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:55.580683   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:55.580839   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:55.580935   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:55.581047   79367 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:55.581197   79367 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:55.581235   79367 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:55.581279   79367 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:55.581285   79367 kubeadm.go:310] 
	I0814 17:42:55.581339   79367 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:55.581355   79367 kubeadm.go:310] 
	I0814 17:42:55.581470   79367 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:55.581480   79367 kubeadm.go:310] 
	I0814 17:42:55.581519   79367 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:55.581586   79367 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:55.581654   79367 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:55.581663   79367 kubeadm.go:310] 
	I0814 17:42:55.581749   79367 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:55.581757   79367 kubeadm.go:310] 
	I0814 17:42:55.581798   79367 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:55.581804   79367 kubeadm.go:310] 
	I0814 17:42:55.581857   79367 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:55.581944   79367 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:55.582007   79367 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:55.582014   79367 kubeadm.go:310] 
	I0814 17:42:55.582085   79367 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:55.582148   79367 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:55.582154   79367 kubeadm.go:310] 
	I0814 17:42:55.582221   79367 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582313   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:55.582333   79367 kubeadm.go:310] 	--control-plane 
	I0814 17:42:55.582336   79367 kubeadm.go:310] 
	I0814 17:42:55.582426   79367 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:55.582434   79367 kubeadm.go:310] 
	I0814 17:42:55.582518   79367 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582678   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:55.582691   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:42:55.582697   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:55.584337   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:55.585493   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:55.596201   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:42:55.617012   79367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:55.617115   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:55.617152   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-545149 minikube.k8s.io/updated_at=2024_08_14T17_42_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=no-preload-545149 minikube.k8s.io/primary=true
	I0814 17:42:55.794262   79367 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:55.794421   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.294450   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.795280   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.294604   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.794700   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.294863   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.795404   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.295066   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.794529   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.294720   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.409254   79367 kubeadm.go:1113] duration metric: took 4.79220609s to wait for elevateKubeSystemPrivileges
	I0814 17:43:00.409300   79367 kubeadm.go:394] duration metric: took 4m53.401266889s to StartCluster
	I0814 17:43:00.409323   79367 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.409419   79367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:43:00.411076   79367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.411313   79367 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:43:00.411438   79367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:43:00.411521   79367 addons.go:69] Setting storage-provisioner=true in profile "no-preload-545149"
	I0814 17:43:00.411529   79367 addons.go:69] Setting default-storageclass=true in profile "no-preload-545149"
	I0814 17:43:00.411552   79367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545149"
	I0814 17:43:00.411554   79367 addons.go:234] Setting addon storage-provisioner=true in "no-preload-545149"
	I0814 17:43:00.411564   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:43:00.411568   79367 addons.go:69] Setting metrics-server=true in profile "no-preload-545149"
	W0814 17:43:00.411566   79367 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:43:00.411601   79367 addons.go:234] Setting addon metrics-server=true in "no-preload-545149"
	W0814 17:43:00.411612   79367 addons.go:243] addon metrics-server should already be in state true
	I0814 17:43:00.411637   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411646   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411922   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.411954   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412019   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412053   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412076   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412103   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412914   79367 out.go:177] * Verifying Kubernetes components...
	I0814 17:43:00.414471   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:43:00.427965   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0814 17:43:00.427966   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0814 17:43:00.428460   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428608   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428985   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429004   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429130   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429147   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429206   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0814 17:43:00.429346   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429443   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429498   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.429543   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.430131   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.430152   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.430435   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.430446   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.430718   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.431238   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.431270   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.433273   79367 addons.go:234] Setting addon default-storageclass=true in "no-preload-545149"
	W0814 17:43:00.433292   79367 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:43:00.433319   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.433551   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.433581   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.450138   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0814 17:43:00.450327   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0814 17:43:00.450697   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.450818   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.451527   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451547   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451695   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451706   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451958   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.452224   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.452283   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.453110   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.453141   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.453937   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.455467   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0814 17:43:00.455825   79367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:43:00.455934   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.456456   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.456479   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.456964   79367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.456981   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:43:00.456999   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.457000   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.457144   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.459264   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.460208   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460606   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.460636   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460750   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.460858   79367 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:43:00.460989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.461150   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.461281   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.462118   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:43:00.462138   79367 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:43:00.462156   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.465200   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465643   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.465710   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465829   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.466004   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.466165   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.466312   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.478054   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0814 17:43:00.478616   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.479176   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.479198   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.479536   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.479725   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.481351   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.481574   79367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.481588   79367 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:43:00.481606   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.484454   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484738   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.484771   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.485222   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.485370   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.485485   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.617562   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:43:00.665134   79367 node_ready.go:35] waiting up to 6m0s for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673659   79367 node_ready.go:49] node "no-preload-545149" has status "Ready":"True"
	I0814 17:43:00.673680   79367 node_ready.go:38] duration metric: took 8.508683ms for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673689   79367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:00.680313   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:00.810401   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.827621   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.871727   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:43:00.871752   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:43:00.969061   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:43:00.969088   79367 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:43:01.103808   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.103839   79367 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:43:01.198160   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.880623   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052957744s)
	I0814 17:43:01.880683   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880697   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.880749   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070305568s)
	I0814 17:43:01.880785   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880804   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881075   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881093   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881103   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881115   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881248   79367 main.go:141] libmachine: (no-preload-545149) DBG | Closing plugin on server side
	I0814 17:43:01.881284   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881312   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881336   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881375   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881385   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881396   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881682   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881703   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.896050   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.896076   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.896351   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.896370   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131404   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131427   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.131744   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.131768   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131780   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131788   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.132004   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.132026   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.132042   79367 addons.go:475] Verifying addon metrics-server=true in "no-preload-545149"
	I0814 17:43:02.133699   79367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:43:03.265508   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:03.265720   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:02.135365   79367 addons.go:510] duration metric: took 1.72392081s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:43:02.687160   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:05.186062   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:07.187193   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:09.188957   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.188990   79367 pod_ready.go:81] duration metric: took 8.508650006s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.189003   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194469   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.194492   79367 pod_ready.go:81] duration metric: took 5.48133ms for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194501   79367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199127   79367 pod_ready.go:92] pod "etcd-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.199150   79367 pod_ready.go:81] duration metric: took 4.643296ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199159   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203804   79367 pod_ready.go:92] pod "kube-apiserver-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.203825   79367 pod_ready.go:81] duration metric: took 4.659513ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203837   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208443   79367 pod_ready.go:92] pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.208465   79367 pod_ready.go:81] duration metric: took 4.620634ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208479   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584443   79367 pod_ready.go:92] pod "kube-proxy-s6bps" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.584471   79367 pod_ready.go:81] duration metric: took 375.985094ms for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584481   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985476   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.985504   79367 pod_ready.go:81] duration metric: took 401.014791ms for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985515   79367 pod_ready.go:38] duration metric: took 9.311816641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:09.985534   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:43:09.985603   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:43:10.002239   79367 api_server.go:72] duration metric: took 9.590875358s to wait for apiserver process to appear ...
	I0814 17:43:10.002276   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:43:10.002304   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:43:10.009410   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:43:10.010351   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:43:10.010381   79367 api_server.go:131] duration metric: took 8.098086ms to wait for apiserver health ...
	I0814 17:43:10.010389   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:43:10.189597   79367 system_pods.go:59] 9 kube-system pods found
	I0814 17:43:10.189629   79367 system_pods.go:61] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.189634   79367 system_pods.go:61] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.189638   79367 system_pods.go:61] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.189642   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.189646   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.189649   79367 system_pods.go:61] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.189652   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.189658   79367 system_pods.go:61] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.189662   79367 system_pods.go:61] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.189670   79367 system_pods.go:74] duration metric: took 179.275641ms to wait for pod list to return data ...
	I0814 17:43:10.189678   79367 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:43:10.385690   79367 default_sa.go:45] found service account: "default"
	I0814 17:43:10.385715   79367 default_sa.go:55] duration metric: took 196.030333ms for default service account to be created ...
	I0814 17:43:10.385723   79367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:43:10.590237   79367 system_pods.go:86] 9 kube-system pods found
	I0814 17:43:10.590272   79367 system_pods.go:89] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.590279   79367 system_pods.go:89] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.590285   79367 system_pods.go:89] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.590291   79367 system_pods.go:89] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.590299   79367 system_pods.go:89] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.590306   79367 system_pods.go:89] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.590312   79367 system_pods.go:89] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.590322   79367 system_pods.go:89] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.590335   79367 system_pods.go:89] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.590351   79367 system_pods.go:126] duration metric: took 204.620982ms to wait for k8s-apps to be running ...
	I0814 17:43:10.590364   79367 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:43:10.590418   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:10.605594   79367 system_svc.go:56] duration metric: took 15.223089ms WaitForService to wait for kubelet
	I0814 17:43:10.605624   79367 kubeadm.go:582] duration metric: took 10.194267533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:43:10.605644   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:43:10.786127   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:43:10.786160   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:43:10.786173   79367 node_conditions.go:105] duration metric: took 180.522994ms to run NodePressure ...
	I0814 17:43:10.786187   79367 start.go:241] waiting for startup goroutines ...
	I0814 17:43:10.786197   79367 start.go:246] waiting for cluster config update ...
	I0814 17:43:10.786210   79367 start.go:255] writing updated cluster config ...
	I0814 17:43:10.786498   79367 ssh_runner.go:195] Run: rm -f paused
	I0814 17:43:10.834139   79367 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:43:10.836315   79367 out.go:177] * Done! kubectl is now configured to use "no-preload-545149" cluster and "default" namespace by default
	I0814 17:43:43.267316   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:43.267596   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:43.267623   80228 kubeadm.go:310] 
	I0814 17:43:43.267680   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:43:43.267757   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:43:43.267778   80228 kubeadm.go:310] 
	I0814 17:43:43.267839   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:43:43.267894   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:43:43.268029   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:43:43.268044   80228 kubeadm.go:310] 
	I0814 17:43:43.268190   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:43:43.268247   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:43:43.268296   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:43:43.268305   80228 kubeadm.go:310] 
	I0814 17:43:43.268446   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:43:43.268561   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:43:43.268572   80228 kubeadm.go:310] 
	I0814 17:43:43.268741   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:43:43.268907   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:43:43.269021   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:43:43.269120   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:43:43.269131   80228 kubeadm.go:310] 
	I0814 17:43:43.269560   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:43:43.269642   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:43:43.269698   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 17:43:43.269809   80228 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 17:43:43.269853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:43:43.733975   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:43.748632   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:43:43.758474   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:43:43.758493   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:43:43.758543   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:43:43.767721   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:43:43.767777   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:43:43.777259   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:43:43.786562   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:43:43.786623   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:43:43.795253   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.803677   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:43:43.803733   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.812416   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:43:43.821020   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:43:43.821075   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:43:43.829709   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:43:44.024836   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:45:40.060126   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:45:40.060266   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:45:40.061931   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:45:40.062003   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:45:40.062110   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:45:40.062231   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:45:40.062360   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:45:40.062459   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:45:40.063940   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:45:40.064041   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:45:40.064124   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:45:40.064230   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:45:40.064305   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:45:40.064398   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:45:40.064471   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:45:40.064550   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:45:40.064632   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:45:40.064712   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:45:40.064798   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:45:40.064857   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:45:40.064913   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:45:40.064956   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:45:40.065040   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:45:40.065146   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:45:40.065222   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:45:40.065366   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:45:40.065490   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:45:40.065547   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:45:40.065648   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:45:40.068108   80228 out.go:204]   - Booting up control plane ...
	I0814 17:45:40.068211   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:45:40.068294   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:45:40.068395   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:45:40.068522   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:45:40.068675   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:45:40.068751   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:45:40.068843   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069048   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069141   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069393   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069510   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069756   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069823   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069982   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070051   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.070204   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070211   80228 kubeadm.go:310] 
	I0814 17:45:40.070244   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:45:40.070291   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:45:40.070299   80228 kubeadm.go:310] 
	I0814 17:45:40.070337   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:45:40.070379   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:45:40.070504   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:45:40.070521   80228 kubeadm.go:310] 
	I0814 17:45:40.070673   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:45:40.070723   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:45:40.070764   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:45:40.070776   80228 kubeadm.go:310] 
	I0814 17:45:40.070876   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:45:40.070945   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:45:40.070953   80228 kubeadm.go:310] 
	I0814 17:45:40.071045   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:45:40.071151   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:45:40.071246   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:45:40.071363   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:45:40.071453   80228 kubeadm.go:310] 
	I0814 17:45:40.071481   80228 kubeadm.go:394] duration metric: took 8m2.599023024s to StartCluster
	I0814 17:45:40.071554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:45:40.071617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:45:40.115691   80228 cri.go:89] found id: ""
	I0814 17:45:40.115719   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.115727   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:45:40.115734   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:45:40.115798   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:45:40.155537   80228 cri.go:89] found id: ""
	I0814 17:45:40.155566   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.155574   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:45:40.155580   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:45:40.155645   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:45:40.189570   80228 cri.go:89] found id: ""
	I0814 17:45:40.189604   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.189616   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:45:40.189625   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:45:40.189708   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:45:40.222496   80228 cri.go:89] found id: ""
	I0814 17:45:40.222521   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.222528   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:45:40.222533   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:45:40.222590   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:45:40.255095   80228 cri.go:89] found id: ""
	I0814 17:45:40.255129   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.255142   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:45:40.255151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:45:40.255233   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:45:40.290297   80228 cri.go:89] found id: ""
	I0814 17:45:40.290326   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.290341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:45:40.290348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:45:40.290396   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:45:40.326660   80228 cri.go:89] found id: ""
	I0814 17:45:40.326685   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.326695   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:45:40.326701   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:45:40.326764   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:45:40.359867   80228 cri.go:89] found id: ""
	I0814 17:45:40.359896   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.359908   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:45:40.359918   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:45:40.359933   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:45:40.397513   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:45:40.397543   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:45:40.451744   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:45:40.451778   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:45:40.475817   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:45:40.475843   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:45:40.575913   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:45:40.575933   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:45:40.575946   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 17:45:40.683455   80228 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:45:40.683509   80228 out.go:239] * 
	W0814 17:45:40.683587   80228 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.683623   80228 out.go:239] * 
	W0814 17:45:40.684431   80228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:45:40.688043   80228 out.go:177] 
	W0814 17:45:40.689238   80228 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.689291   80228 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:45:40.689318   80228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:45:40.690913   80228 out.go:177] 
	
	
	==> CRI-O <==
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.577985302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657542577948251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fc85cec-eb3e-4909-9e02-4cd71954b2ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.578472830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad5aa788-e06b-49b9-adb5-b197b163060d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.578523943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad5aa788-e06b-49b9-adb5-b197b163060d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.578571338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad5aa788-e06b-49b9-adb5-b197b163060d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.610211074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a682312e-75a7-4077-8940-da5f20524b1e name=/runtime.v1.RuntimeService/Version
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.610286062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a682312e-75a7-4077-8940-da5f20524b1e name=/runtime.v1.RuntimeService/Version
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.611350231Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e71ece58-f9a0-4dfe-b598-e7dc402507b4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.611773418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657542611752338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e71ece58-f9a0-4dfe-b598-e7dc402507b4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.612332241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a608a23-5902-47d0-9447-468867c0320f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.612393513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a608a23-5902-47d0-9447-468867c0320f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.612437246Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6a608a23-5902-47d0-9447-468867c0320f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.642302295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c77fe8e1-b32c-404c-a418-620cfb9af0ed name=/runtime.v1.RuntimeService/Version
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.642374187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c77fe8e1-b32c-404c-a418-620cfb9af0ed name=/runtime.v1.RuntimeService/Version
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.643262479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b0a7199-87e9-45e0-a466-33689c73bc6e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.643881956Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657542643847307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b0a7199-87e9-45e0-a466-33689c73bc6e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.644446183Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8779725d-b6b3-41f1-81f8-319bf4314f8b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.644505627Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8779725d-b6b3-41f1-81f8-319bf4314f8b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.644541031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8779725d-b6b3-41f1-81f8-319bf4314f8b name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.675011897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6dc55e4e-cb0f-4e9a-b78b-98bdc02dbe27 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.675113280Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6dc55e4e-cb0f-4e9a-b78b-98bdc02dbe27 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.677507751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b81c3d53-5ab1-4290-a3a7-bd94ed73578e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.678806711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657542678726841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b81c3d53-5ab1-4290-a3a7-bd94ed73578e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.680732488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b498ab8a-8e5e-4556-a4e5-c2baceb92354 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.680819874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b498ab8a-8e5e-4556-a4e5-c2baceb92354 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:45:42 old-k8s-version-505584 crio[648]: time="2024-08-14 17:45:42.680853491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b498ab8a-8e5e-4556-a4e5-c2baceb92354 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug14 17:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051751] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038545] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.928700] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.931842] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.538149] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.402686] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068532] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066584] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.214010] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.127681] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.254794] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.216784] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.064759] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.847232] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +11.985584] kauditd_printk_skb: 46 callbacks suppressed
	[Aug14 17:41] systemd-fstab-generator[5130]: Ignoring "noauto" option for root device
	[Aug14 17:43] systemd-fstab-generator[5418]: Ignoring "noauto" option for root device
	[  +0.067751] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 17:45:42 up 8 min,  0 users,  load average: 0.07, 0.11, 0.08
	Linux old-k8s-version-505584 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a17190, 0xc000791f40)
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]: goroutine 169 [syscall]:
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]: syscall.Syscall6(0xe8, 0xf, 0xc000d8fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xf, 0xc000d8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000a1f740, 0x0, 0x0, 0x0)
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000101a90)
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Aug 14 17:45:39 old-k8s-version-505584 kubelet[5597]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Aug 14 17:45:39 old-k8s-version-505584 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 14 17:45:39 old-k8s-version-505584 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 14 17:45:40 old-k8s-version-505584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 14 17:45:40 old-k8s-version-505584 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 14 17:45:40 old-k8s-version-505584 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 14 17:45:40 old-k8s-version-505584 kubelet[5654]: I0814 17:45:40.513976    5654 server.go:416] Version: v1.20.0
	Aug 14 17:45:40 old-k8s-version-505584 kubelet[5654]: I0814 17:45:40.514256    5654 server.go:837] Client rotation is on, will bootstrap in background
	Aug 14 17:45:40 old-k8s-version-505584 kubelet[5654]: I0814 17:45:40.516260    5654 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 14 17:45:40 old-k8s-version-505584 kubelet[5654]: W0814 17:45:40.517278    5654 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 14 17:45:40 old-k8s-version-505584 kubelet[5654]: I0814 17:45:40.517319    5654 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 2 (228.664596ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-505584" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (717.97s)
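The kubelet journal above shows the service crash-looping (systemd restart counter at 20) while the apiserver is reported as Stopped. As a follow-up sketch only (these commands were not part of the recorded run, and assume the profile VM is still reachable over SSH), the crash loop could be inspected directly on the node:

	out/minikube-linux-amd64 ssh -p old-k8s-version-505584 -- sudo journalctl -u kubelet --no-pager -n 100
	out/minikube-linux-amd64 ssh -p old-k8s-version-505584 -- sudo crictl ps -a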

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0814 17:42:19.605651   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:42:21.996934   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-309673 -n embed-certs-309673
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-14 17:50:28.735533645 +0000 UTC m=+6062.420816443
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
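The condition the test polls for can also be checked by hand; a minimal sketch, assuming minikube left a kubectl context named after the profile embed-certs-309673 (commands not part of the original run):

	kubectl --context embed-certs-309673 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-309673 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

An empty pod list here would indicate the dashboard deployment never came back after the stop/start, consistent with the context-deadline warnings above.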
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-309673 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-309673 logs -n 25: (1.953124489s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-984053 sudo cat                              | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo find                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo crio                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-984053                                       | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-005029 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | disable-driver-mounts-005029                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:30 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-545149             | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309673            | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:33:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:33:46.321266   80228 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:33:46.321519   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321529   80228 out.go:304] Setting ErrFile to fd 2...
	I0814 17:33:46.321533   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321691   80228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:33:46.322185   80228 out.go:298] Setting JSON to false
	I0814 17:33:46.323102   80228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8170,"bootTime":1723648656,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:33:46.323161   80228 start.go:139] virtualization: kvm guest
	I0814 17:33:46.325361   80228 out.go:177] * [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:33:46.326668   80228 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:33:46.326679   80228 notify.go:220] Checking for updates...
	I0814 17:33:46.329217   80228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:33:46.330813   80228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:33:46.332019   80228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:33:46.333264   80228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:33:46.334480   80228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:33:46.336108   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:33:46.336521   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.336564   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.351154   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0814 17:33:46.351563   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.352042   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.352061   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.352395   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.352567   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.354248   80228 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 17:33:46.355547   80228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:33:46.355834   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.355865   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.370976   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0814 17:33:46.371452   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.371977   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.372008   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.372376   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.372624   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.407797   80228 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:33:46.408905   80228 start.go:297] selected driver: kvm2
	I0814 17:33:46.408918   80228 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.409022   80228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:33:46.409677   80228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.409753   80228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:33:46.424801   80228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:33:46.425288   80228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:33:46.425338   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:33:46.425349   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:33:46.425396   80228 start.go:340] cluster config:
	{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.425518   80228 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.427224   80228 out.go:177] * Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	I0814 17:33:46.428485   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:33:46.428516   80228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:33:46.428523   80228 cache.go:56] Caching tarball of preloaded images
	I0814 17:33:46.428589   80228 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:33:46.428600   80228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:33:46.428727   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:33:46.428899   80228 start.go:360] acquireMachinesLock for old-k8s-version-505584: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:33:47.579625   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:50.651557   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:56.731587   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:59.803787   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:05.883582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:08.959564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:15.035593   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:18.107634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:24.187624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:27.259634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:33.339631   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:36.411675   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:42.491633   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:45.563609   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:51.643582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:54.715620   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:00.795564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:03.867637   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:09.947634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:13.019646   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:19.099578   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:22.171640   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:28.251634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:31.323645   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:37.403627   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:40.475635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:46.555591   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:49.627635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:55.707632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:58.779532   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:04.859619   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:07.931632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:14.011612   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:17.083624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:23.163638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:26.235638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:29.240279   79521 start.go:364] duration metric: took 4m23.88398072s to acquireMachinesLock for "embed-certs-309673"
	I0814 17:36:29.240341   79521 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:29.240351   79521 fix.go:54] fixHost starting: 
	I0814 17:36:29.240703   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:29.240730   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:29.255901   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0814 17:36:29.256372   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:29.256816   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:36:29.256839   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:29.257153   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:29.257337   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:29.257518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:36:29.259382   79521 fix.go:112] recreateIfNeeded on embed-certs-309673: state=Stopped err=<nil>
	I0814 17:36:29.259419   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	W0814 17:36:29.259583   79521 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:29.261931   79521 out.go:177] * Restarting existing kvm2 VM for "embed-certs-309673" ...
	I0814 17:36:29.263301   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Start
	I0814 17:36:29.263487   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring networks are active...
	I0814 17:36:29.264251   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network default is active
	I0814 17:36:29.264797   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network mk-embed-certs-309673 is active
	I0814 17:36:29.265331   79521 main.go:141] libmachine: (embed-certs-309673) Getting domain xml...
	I0814 17:36:29.266055   79521 main.go:141] libmachine: (embed-certs-309673) Creating domain...
	I0814 17:36:29.237663   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:29.237704   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238088   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:36:29.238131   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238337   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:36:29.240159   79367 machine.go:97] duration metric: took 4m37.421920583s to provisionDockerMachine
	I0814 17:36:29.240195   79367 fix.go:56] duration metric: took 4m37.443181113s for fixHost
	I0814 17:36:29.240202   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 4m37.443414836s
	W0814 17:36:29.240223   79367 start.go:714] error starting host: provision: host is not running
	W0814 17:36:29.240348   79367 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 17:36:29.240358   79367 start.go:729] Will try again in 5 seconds ...
	I0814 17:36:30.482377   79521 main.go:141] libmachine: (embed-certs-309673) Waiting to get IP...
	I0814 17:36:30.483405   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.483750   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.483837   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.483729   80776 retry.go:31] will retry after 224.900105ms: waiting for machine to come up
	I0814 17:36:30.710259   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.710718   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.710748   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.710679   80776 retry.go:31] will retry after 322.892012ms: waiting for machine to come up
	I0814 17:36:31.035358   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.035807   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.035835   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.035757   80776 retry.go:31] will retry after 374.226901ms: waiting for machine to come up
	I0814 17:36:31.411228   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.411783   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.411813   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.411717   80776 retry.go:31] will retry after 472.149905ms: waiting for machine to come up
	I0814 17:36:31.885265   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.885787   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.885810   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.885757   80776 retry.go:31] will retry after 676.063343ms: waiting for machine to come up
	I0814 17:36:32.563206   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:32.563711   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:32.563745   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:32.563658   80776 retry.go:31] will retry after 904.634039ms: waiting for machine to come up
	I0814 17:36:33.469832   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:33.470255   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:33.470278   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:33.470206   80776 retry.go:31] will retry after 1.132974911s: waiting for machine to come up
	I0814 17:36:34.605040   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:34.605542   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:34.605576   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:34.605498   80776 retry.go:31] will retry after 1.210457498s: waiting for machine to come up
	I0814 17:36:34.242590   79367 start.go:360] acquireMachinesLock for no-preload-545149: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:36:35.817809   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:35.818152   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:35.818177   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:35.818111   80776 retry.go:31] will retry after 1.275236618s: waiting for machine to come up
	I0814 17:36:37.095551   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:37.095975   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:37.096001   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:37.095937   80776 retry.go:31] will retry after 1.716925001s: waiting for machine to come up
	I0814 17:36:38.814927   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:38.815916   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:38.815943   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:38.815864   80776 retry.go:31] will retry after 2.040428036s: waiting for machine to come up
	I0814 17:36:40.858640   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:40.859157   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:40.859188   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:40.859108   80776 retry.go:31] will retry after 2.259949864s: waiting for machine to come up
	I0814 17:36:43.120436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:43.120913   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:43.120939   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:43.120879   80776 retry.go:31] will retry after 3.64334808s: waiting for machine to come up
	I0814 17:36:47.975977   79871 start.go:364] duration metric: took 3m52.18367446s to acquireMachinesLock for "default-k8s-diff-port-885666"
	I0814 17:36:47.976049   79871 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:47.976064   79871 fix.go:54] fixHost starting: 
	I0814 17:36:47.976457   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:47.976492   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:47.993513   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0814 17:36:47.993940   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:47.994480   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:36:47.994504   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:47.994815   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:47.995005   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:36:47.995181   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:36:47.996716   79871 fix.go:112] recreateIfNeeded on default-k8s-diff-port-885666: state=Stopped err=<nil>
	I0814 17:36:47.996755   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	W0814 17:36:47.996923   79871 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:47.998967   79871 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-885666" ...
	I0814 17:36:46.766908   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767458   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has current primary IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767500   79521 main.go:141] libmachine: (embed-certs-309673) Found IP for machine: 192.168.61.2
	I0814 17:36:46.767516   79521 main.go:141] libmachine: (embed-certs-309673) Reserving static IP address...
	I0814 17:36:46.767974   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.767993   79521 main.go:141] libmachine: (embed-certs-309673) Reserved static IP address: 192.168.61.2
	I0814 17:36:46.768006   79521 main.go:141] libmachine: (embed-certs-309673) DBG | skip adding static IP to network mk-embed-certs-309673 - found existing host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"}
	I0814 17:36:46.768017   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Getting to WaitForSSH function...
	I0814 17:36:46.768023   79521 main.go:141] libmachine: (embed-certs-309673) Waiting for SSH to be available...
	I0814 17:36:46.770187   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770517   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.770548   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770612   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH client type: external
	I0814 17:36:46.770643   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa (-rw-------)
	I0814 17:36:46.770672   79521 main.go:141] libmachine: (embed-certs-309673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:36:46.770697   79521 main.go:141] libmachine: (embed-certs-309673) DBG | About to run SSH command:
	I0814 17:36:46.770703   79521 main.go:141] libmachine: (embed-certs-309673) DBG | exit 0
	I0814 17:36:46.895078   79521 main.go:141] libmachine: (embed-certs-309673) DBG | SSH cmd err, output: <nil>: 
	I0814 17:36:46.895444   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetConfigRaw
	I0814 17:36:46.896033   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:46.898715   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899085   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.899117   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899434   79521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/config.json ...
	I0814 17:36:46.899701   79521 machine.go:94] provisionDockerMachine start ...
	I0814 17:36:46.899723   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:46.899906   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:46.901985   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902244   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.902268   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902398   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:46.902564   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902707   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902829   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:46.902966   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:46.903201   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:46.903213   79521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:36:47.007289   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:36:47.007313   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007589   79521 buildroot.go:166] provisioning hostname "embed-certs-309673"
	I0814 17:36:47.007608   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007802   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.010311   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010631   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.010670   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010805   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.010956   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011067   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.011269   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.011455   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.011467   79521 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-309673 && echo "embed-certs-309673" | sudo tee /etc/hostname
	I0814 17:36:47.128575   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-309673
	
	I0814 17:36:47.128601   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.131125   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131464   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.131493   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131655   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.131970   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132146   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132286   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.132457   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.132614   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.132630   79521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-309673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-309673/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-309673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:36:47.247426   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:47.247469   79521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:36:47.247486   79521 buildroot.go:174] setting up certificates
	I0814 17:36:47.247496   79521 provision.go:84] configureAuth start
	I0814 17:36:47.247506   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.247768   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.250616   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.250993   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.251018   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.251148   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.253149   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.253465   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253551   79521 provision.go:143] copyHostCerts
	I0814 17:36:47.253616   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:36:47.253628   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:36:47.253703   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:36:47.253817   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:36:47.253835   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:36:47.253875   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:36:47.253952   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:36:47.253962   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:36:47.253994   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:36:47.254060   79521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.embed-certs-309673 san=[127.0.0.1 192.168.61.2 embed-certs-309673 localhost minikube]
	I0814 17:36:47.338831   79521 provision.go:177] copyRemoteCerts
	I0814 17:36:47.338892   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:36:47.338921   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.341582   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.341897   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.341915   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.342053   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.342237   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.342374   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.342497   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.424777   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:36:47.446682   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:36:47.467672   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:36:47.488423   79521 provision.go:87] duration metric: took 240.914172ms to configureAuth
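The configureAuth step above signs a server certificate whose SANs (127.0.0.1, 192.168.61.2, embed-certs-309673, localhost, minikube) come straight from the provision line. A minimal Go sketch of that signing step, assuming a throwaway CA generated in-process (the real flow reuses ca.pem/ca-key.pem from the .minikube store); the must helper and the organization/subject strings are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA so the sketch is self-contained; minikube loads ca.pem/ca-key.pem instead.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate carrying the SANs listed in the log line above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-309673"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-309673", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.2")},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}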
	I0814 17:36:47.488453   79521 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:36:47.488645   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:36:47.488733   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.491453   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.491793   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.491816   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.492028   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.492216   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492351   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492479   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.492716   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.492909   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.492931   79521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:36:47.746210   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:36:47.746248   79521 machine.go:97] duration metric: took 846.530779ms to provisionDockerMachine
	I0814 17:36:47.746260   79521 start.go:293] postStartSetup for "embed-certs-309673" (driver="kvm2")
	I0814 17:36:47.746274   79521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:36:47.746297   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.746659   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:36:47.746694   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.749342   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749674   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.749702   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749831   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.750004   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.750126   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.750272   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.833279   79521 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:36:47.837076   79521 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:36:47.837099   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:36:47.837183   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:36:47.837269   79521 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:36:47.837387   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:36:47.845640   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:47.866978   79521 start.go:296] duration metric: took 120.70557ms for postStartSetup
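postStartSetup mirrors everything under the profile's .minikube/files tree onto the guest (here files/etc/ssl/certs/211772.pem lands in /etc/ssl/certs). A rough sketch of that mapping, assuming only the convention visible in the filesync lines; the planFileSync name and the relative root are made up:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// planFileSync lists local assets under root and the guest path each one maps to,
// mirroring the "<root>/<path> -> /<path>" convention seen in the log above.
func planFileSync(root string) ([]string, error) {
	var plan []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, walkErr error) error {
		if walkErr != nil || d.IsDir() {
			return walkErr
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		plan = append(plan, fmt.Sprintf("%s -> /%s", path, filepath.ToSlash(rel)))
		return nil
	})
	return plan, err
}

func main() {
	// Illustrative root; the log uses the Jenkins workspace under minikube-integration.
	plan, err := planFileSync(".minikube/files")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range plan {
		fmt.Println(p)
	}
}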
	I0814 17:36:47.867012   79521 fix.go:56] duration metric: took 18.626661733s for fixHost
	I0814 17:36:47.867030   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.869687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870016   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.870046   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870220   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.870399   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870660   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870827   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.870999   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.871209   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.871221   79521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:36:47.975817   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657007.950271601
	
	I0814 17:36:47.975848   79521 fix.go:216] guest clock: 1723657007.950271601
	I0814 17:36:47.975860   79521 fix.go:229] Guest: 2024-08-14 17:36:47.950271601 +0000 UTC Remote: 2024-08-14 17:36:47.867016056 +0000 UTC m=+282.648397849 (delta=83.255545ms)
	I0814 17:36:47.975889   79521 fix.go:200] guest clock delta is within tolerance: 83.255545ms
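The fix.go lines above read the guest clock via date +%s.%N, compare it with the host, and accept the 83ms delta. A small sketch of that comparison, parsing the same output format; the 2-second tolerance is an illustrative value, not minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "1723657007.950271601" (date +%s.%N output) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723657007.950271601")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	}
}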
	I0814 17:36:47.975896   79521 start.go:83] releasing machines lock for "embed-certs-309673", held for 18.735575335s
	I0814 17:36:47.975931   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.976213   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.978934   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979457   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.979483   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979625   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980134   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980303   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980382   79521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:36:47.980428   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.980574   79521 ssh_runner.go:195] Run: cat /version.json
	I0814 17:36:47.980603   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.983247   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983557   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983649   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.983687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983828   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984032   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984042   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.984063   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.984183   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984232   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984320   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984412   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.984467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984608   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:48.064891   79521 ssh_runner.go:195] Run: systemctl --version
	I0814 17:36:48.101403   79521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:36:48.239841   79521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:36:48.245634   79521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:36:48.245718   79521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:36:48.260517   79521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:36:48.260543   79521 start.go:495] detecting cgroup driver to use...
	I0814 17:36:48.260597   79521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:36:48.275003   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:36:48.290316   79521 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:36:48.290376   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:36:48.304351   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:36:48.320954   79521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:36:48.434176   79521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:36:48.582137   79521 docker.go:233] disabling docker service ...
	I0814 17:36:48.582217   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:36:48.595784   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:36:48.608379   79521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:36:48.735500   79521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:36:48.876194   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:36:48.891826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:36:48.910820   79521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:36:48.910887   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.921125   79521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:36:48.921198   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.931615   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.942779   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.953124   79521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:36:48.963454   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.974457   79521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.991583   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
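Taken together, the sed runs above leave a CRI-O drop-in that pins the pause image, switches to the cgroupfs manager with conmon in the pod cgroup, and opens low ports for unprivileged pods. Roughly (the section headers are assumptions; only the keys the commands above touch are shown), /etc/crio/crio.conf.d/02-crio.conf ends up containing something like:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]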
	I0814 17:36:49.006059   79521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:36:49.015586   79521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:36:49.015649   79521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:36:49.028742   79521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:36:49.038126   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:49.155387   79521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:36:49.318598   79521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:36:49.318679   79521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:36:49.323575   79521 start.go:563] Will wait 60s for crictl version
	I0814 17:36:49.323636   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:36:49.327233   79521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:36:49.369724   79521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:36:49.369814   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.399516   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.431594   79521 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:36:49.432940   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:49.435776   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436168   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:49.436199   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436447   79521 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 17:36:49.440606   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:49.453159   79521 kubeadm.go:883] updating cluster {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:36:49.453272   79521 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:36:49.453311   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:49.486635   79521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:36:49.486708   79521 ssh_runner.go:195] Run: which lz4
	I0814 17:36:49.490626   79521 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:36:49.494822   79521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:36:49.494852   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
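The preload decision above hinges on whether crictl already reports the expected control-plane images; only when it does not is the 389MB tarball copied over. A sketch of that check, assuming crictl's "images --output json" layout (the repoTags field name is my reading of that output, not confirmed by this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagesPreloaded reports whether any image tag on the guest contains wantTag.
func imagesPreloaded(wantTag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, wantTag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagesPreloaded("kube-apiserver:v1.31.0")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	fmt.Println("preloaded:", ok)
}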
	I0814 17:36:48.000271   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Start
	I0814 17:36:48.000453   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring networks are active...
	I0814 17:36:48.001246   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network default is active
	I0814 17:36:48.001621   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network mk-default-k8s-diff-port-885666 is active
	I0814 17:36:48.002158   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Getting domain xml...
	I0814 17:36:48.002982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Creating domain...
	I0814 17:36:49.272729   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting to get IP...
	I0814 17:36:49.273726   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274182   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274273   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.274157   80921 retry.go:31] will retry after 208.258845ms: waiting for machine to come up
	I0814 17:36:49.483781   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484278   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.484211   80921 retry.go:31] will retry after 318.193974ms: waiting for machine to come up
	I0814 17:36:49.803815   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804311   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.804277   80921 retry.go:31] will retry after 426.023242ms: waiting for machine to come up
	I0814 17:36:50.232060   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232610   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232646   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.232519   80921 retry.go:31] will retry after 534.392065ms: waiting for machine to come up
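The retry.go lines for default-k8s-diff-port-885666 show the wait-for-IP loop backing off 208ms, 318ms, 426ms, 534ms between DHCP-lease lookups. A toy version of that pattern; lookupIP stands in for the libvirt lease query and both the backoff constants and the returned address are made up:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// lookupIP is purely illustrative; the real code asks libvirt for the domain's DHCP lease.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errNoIP
	}
	return "192.168.50.10", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			return ip, nil
		}
		// Grow the wait each round and add jitter, like the intervals in the log.
		wait := time.Duration(200+attempt*100)*time.Millisecond +
			time.Duration(rand.Intn(150))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("got IP:", ip)
}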
	I0814 17:36:50.745416   79521 crio.go:462] duration metric: took 1.254815826s to copy over tarball
	I0814 17:36:50.745515   79521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:36:52.865848   79521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.120299454s)
	I0814 17:36:52.865879   79521 crio.go:469] duration metric: took 2.120437156s to extract the tarball
	I0814 17:36:52.865887   79521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:36:52.901808   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:52.946366   79521 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:36:52.946386   79521 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:36:52.946394   79521 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.31.0 crio true true} ...
	I0814 17:36:52.946492   79521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-309673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:36:52.946556   79521 ssh_runner.go:195] Run: crio config
	I0814 17:36:52.992520   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:36:52.992541   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:36:52.992553   79521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:36:52.992577   79521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-309673 NodeName:embed-certs-309673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:36:52.992740   79521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-309673"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:36:52.992811   79521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:36:53.002460   79521 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:36:53.002539   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:36:53.011167   79521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 17:36:53.026436   79521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:36:53.041728   79521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0814 17:36:53.059102   79521 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0814 17:36:53.062728   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:53.073803   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:53.200870   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:36:53.217448   79521 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673 for IP: 192.168.61.2
	I0814 17:36:53.217472   79521 certs.go:194] generating shared ca certs ...
	I0814 17:36:53.217495   79521 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:36:53.217694   79521 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:36:53.217755   79521 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:36:53.217766   79521 certs.go:256] generating profile certs ...
	I0814 17:36:53.217876   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/client.key
	I0814 17:36:53.217961   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key.83510bb8
	I0814 17:36:53.218034   79521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key
	I0814 17:36:53.218202   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:36:53.218248   79521 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:36:53.218272   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:36:53.218309   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:36:53.218343   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:36:53.218380   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:36:53.218447   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:53.219187   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:36:53.273437   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:36:53.307566   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:36:53.330107   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:36:53.360324   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 17:36:53.386974   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:36:53.409537   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:36:53.433873   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:36:53.456408   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:36:53.478233   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:36:53.500264   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:36:53.522440   79521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:36:53.538977   79521 ssh_runner.go:195] Run: openssl version
	I0814 17:36:53.544866   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:36:53.555085   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559340   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559399   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.565106   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:36:53.575561   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:36:53.585605   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589838   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589911   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.595165   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:36:53.604934   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:36:53.615153   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619362   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619435   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.624949   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
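The openssl/ln sequence above installs each CA PEM under /etc/ssl/certs using its subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0). A compact sketch of the same dance, shelling out to openssl for the hash; the paths are the ones from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links a PEM into /etc/ssl/certs under its OpenSSL subject-hash name,
// mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked; same intent as the "test -L || ln -fs" guard
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}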
	I0814 17:36:53.635459   79521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:36:53.639814   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:36:53.645419   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:36:53.651013   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:36:53.657004   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:36:53.662540   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:36:53.668187   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
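Each "openssl x509 ... -checkend 86400" run above asks whether a control-plane certificate stays valid for another 24 hours. The same check as a Go sketch (the path is one of the certs from the log; any PEM-encoded certificate works):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at path is still valid d from now,
// the question "openssl x509 -checkend" answers through its exit status.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for the next 24h:", ok)
}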
	I0814 17:36:53.673762   79521 kubeadm.go:392] StartCluster: {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:36:53.673867   79521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:36:53.673930   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.709404   79521 cri.go:89] found id: ""
	I0814 17:36:53.709490   79521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:36:53.719041   79521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:36:53.719068   79521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:36:53.719123   79521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:36:53.728077   79521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:36:53.729030   79521 kubeconfig.go:125] found "embed-certs-309673" server: "https://192.168.61.2:8443"
	I0814 17:36:53.730943   79521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:36:53.739841   79521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0814 17:36:53.739872   79521 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:36:53.739886   79521 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:36:53.739947   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.777400   79521 cri.go:89] found id: ""
	I0814 17:36:53.777476   79521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:36:53.792838   79521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:36:53.802189   79521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:36:53.802223   79521 kubeadm.go:157] found existing configuration files:
	
	I0814 17:36:53.802278   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:36:53.813778   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:36:53.813854   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:36:53.825962   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:36:53.834929   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:36:53.834987   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:36:53.846315   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.855138   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:36:53.855206   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.864109   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:36:53.872613   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:36:53.872672   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:36:53.881307   79521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:36:53.890148   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.002103   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.664940   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.868608   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.932317   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
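The restart path runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the version-pinned binaries directory rather than a full "kubeadm init". A bare-bones driver for the same command sequence shown above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("$ %s\n%s", cmd, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}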
	I0814 17:36:55.006430   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:36:55.006523   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:50.768099   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768599   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768629   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.768554   80921 retry.go:31] will retry after 487.741283ms: waiting for machine to come up
	I0814 17:36:51.258499   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259047   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:51.258975   80921 retry.go:31] will retry after 831.435484ms: waiting for machine to come up
	I0814 17:36:52.091900   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092297   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092351   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:52.092249   80921 retry.go:31] will retry after 1.067858402s: waiting for machine to come up
	I0814 17:36:53.161928   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162393   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:53.162366   80921 retry.go:31] will retry after 1.33971606s: waiting for machine to come up
	I0814 17:36:54.503810   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504184   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504214   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:54.504121   80921 retry.go:31] will retry after 1.4882184s: waiting for machine to come up
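	The interleaved retry.go lines show the kvm2 driver polling libvirt for the VM's DHCP lease and sleeping a little longer after each miss. A generic wait-with-growing-backoff loop of that shape (a sketch, not minikube's retry package) could look like:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() until it succeeds or the deadline passes, sleeping a
	// slightly longer, jittered interval after each failure, much like the
	// "will retry after ..." lines in the log above.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 500 * time.Millisecond
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %s\n", jittered)
			time.Sleep(jittered)
			delay += delay / 2 // grow the base interval
		}
	}

	func main() {
		_ = waitFor(func() error { return errors.New("machine not up yet") }, 3*time.Second)
	}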
	I0814 17:36:55.506634   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.007367   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.507265   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.007343   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.026436   79521 api_server.go:72] duration metric: took 2.020005984s to wait for apiserver process to appear ...
	I0814 17:36:57.026471   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:36:57.026496   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:36:55.994824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995255   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995283   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:55.995206   80921 retry.go:31] will retry after 1.65461779s: waiting for machine to come up
	I0814 17:36:57.651449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651837   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651867   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:57.651794   80921 retry.go:31] will retry after 2.38071296s: waiting for machine to come up
	I0814 17:37:00.033719   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034261   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034290   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:00.034204   80921 retry.go:31] will retry after 3.476533232s: waiting for machine to come up
	I0814 17:37:00.329636   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.329674   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.329689   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.357287   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.357334   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.527150   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.536020   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:00.536058   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.026558   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.034241   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.034271   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.526814   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.536226   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.536267   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:02.026791   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:02.031068   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:37:02.037240   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:02.037266   79521 api_server.go:131] duration metric: took 5.010786446s to wait for apiserver health ...
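	Once the apiserver process exists, the log polls https://192.168.61.2:8443/healthz until it stops answering 403/500 and returns 200 "ok". A stripped-down version of that health poll, assuming a self-signed serving cert during bring-up (hence the skipped TLS verification), might look like:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Verification is skipped here purely for illustration; the apiserver's
		// cert is self-signed while the cluster is still coming up.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		url := "https://192.168.61.2:8443/healthz"
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // healthz reported "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}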
	I0814 17:37:02.037278   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:37:02.037286   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:02.039248   79521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:02.040543   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:02.050754   79521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:02.067333   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:02.076082   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:02.076115   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:02.076130   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:02.076139   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:02.076178   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:02.076198   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:02.076218   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:02.076233   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:02.076242   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:02.076253   79521 system_pods.go:74] duration metric: took 8.901356ms to wait for pod list to return data ...
	I0814 17:37:02.076266   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:02.080064   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:02.080087   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:02.080101   79521 node_conditions.go:105] duration metric: took 3.829329ms to run NodePressure ...
	I0814 17:37:02.080121   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:02.359163   79521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368689   79521 kubeadm.go:739] kubelet initialised
	I0814 17:37:02.368717   79521 kubeadm.go:740] duration metric: took 9.524301ms waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368728   79521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:02.376056   79521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.381317   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381347   79521 pod_ready.go:81] duration metric: took 5.262062ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.381359   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381370   79521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.386799   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386822   79521 pod_ready.go:81] duration metric: took 5.440585ms for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.386832   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386838   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.392829   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392853   79521 pod_ready.go:81] duration metric: took 6.003762ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.392864   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392874   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.470943   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470975   79521 pod_ready.go:81] duration metric: took 78.089715ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.470984   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470996   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.870134   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870163   79521 pod_ready.go:81] duration metric: took 399.157385ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.870175   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870183   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.270805   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270837   79521 pod_ready.go:81] duration metric: took 400.647029ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.270848   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270856   79521 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.671023   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671058   79521 pod_ready.go:81] duration metric: took 400.191147ms for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.671070   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671079   79521 pod_ready.go:38] duration metric: took 1.302340033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
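	The pod_ready.go loop above inspects each system-critical pod's Ready condition but skips the wait while the hosting node itself is not Ready. A condensed client-go check of a single pod's Ready condition, using a hypothetical kubeconfig path rather than the Jenkins workspace path from this run, could be written as:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig location, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-6f6b679f8f-kccp8", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status)
			}
		}
	}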
	I0814 17:37:03.671098   79521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:37:03.683676   79521 ops.go:34] apiserver oom_adj: -16
	I0814 17:37:03.683701   79521 kubeadm.go:597] duration metric: took 9.964625256s to restartPrimaryControlPlane
	I0814 17:37:03.683712   79521 kubeadm.go:394] duration metric: took 10.009956133s to StartCluster
	I0814 17:37:03.683729   79521 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.683809   79521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:03.685474   79521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.685708   79521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:37:03.685766   79521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:37:03.685850   79521 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-309673"
	I0814 17:37:03.685862   79521 addons.go:69] Setting default-storageclass=true in profile "embed-certs-309673"
	I0814 17:37:03.685900   79521 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-309673"
	I0814 17:37:03.685907   79521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-309673"
	W0814 17:37:03.685911   79521 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:37:03.685933   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:03.685933   79521 addons.go:69] Setting metrics-server=true in profile "embed-certs-309673"
	I0814 17:37:03.685988   79521 addons.go:234] Setting addon metrics-server=true in "embed-certs-309673"
	W0814 17:37:03.686006   79521 addons.go:243] addon metrics-server should already be in state true
	I0814 17:37:03.685945   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686076   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686284   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686362   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686391   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686422   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686482   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686538   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.687598   79521 out.go:177] * Verifying Kubernetes components...
	I0814 17:37:03.688995   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:03.701610   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0814 17:37:03.702174   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.702789   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.702817   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.703223   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.703682   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.704077   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0814 17:37:03.704508   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.704864   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0814 17:37:03.705141   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705154   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.705224   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.705473   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.705656   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705670   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.706806   79521 addons.go:234] Setting addon default-storageclass=true in "embed-certs-309673"
	W0814 17:37:03.706824   79521 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:37:03.706851   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.707093   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707112   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.707420   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.707536   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707584   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.708017   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.708079   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.722383   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0814 17:37:03.722779   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.723288   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.723307   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.728799   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0814 17:37:03.728839   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38781
	I0814 17:37:03.728928   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.729426   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729495   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729776   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.729809   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729967   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.729973   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.730360   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730371   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730698   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.730749   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.732979   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.733596   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.735250   79521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:03.735262   79521 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:37:03.736576   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:37:03.736593   79521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:37:03.736607   79521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.736612   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.736620   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:37:03.736637   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.740008   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740123   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740491   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740558   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740676   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.740819   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740842   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740872   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.740994   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741120   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.741160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.741527   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.741692   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741817   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.749144   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0814 17:37:03.749482   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.749914   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.749929   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.750267   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.750467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.752107   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.752325   79521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:03.752339   79521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:37:03.752360   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.754559   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.754845   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.754859   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.755073   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.755247   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.755402   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.755529   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.877535   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:03.897022   79521 node_ready.go:35] waiting up to 6m0s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:03.951512   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.988066   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:37:03.988085   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:37:04.014925   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:04.025506   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:37:04.025531   79521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:37:04.072457   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:04.072480   79521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:37:04.104804   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:05.067867   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.116315804s)
	I0814 17:37:05.067888   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052939793s)
	I0814 17:37:05.067925   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.067935   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068000   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068023   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068241   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068322   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068336   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068345   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068364   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068454   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068485   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068497   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068505   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068795   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068815   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068823   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068830   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068872   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068905   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.087716   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.087746   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.088086   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.088106   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113388   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.008529856s)
	I0814 17:37:05.113441   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113458   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.113736   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.113787   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.113800   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113812   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113823   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.114057   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.114071   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.114081   79521 addons.go:475] Verifying addon metrics-server=true in "embed-certs-309673"
	I0814 17:37:05.114163   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.116443   79521 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:37:05.118087   79521 addons.go:510] duration metric: took 1.432323959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
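	Enabling the addons above amounts to copying the manifests onto the node and applying them with the node-local kubectl and kubeconfig. A compact sketch of the metrics-server apply step, with hypothetical host and key values, is:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Hypothetical connection details for illustration only.
		host := "docker@192.168.61.2"
		keyPath := "/path/to/id_rsa"

		// Mirrors the apply command in the log: all four metrics-server manifests in one call.
		remote := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
			"/var/lib/minikube/binaries/v1.31.0/kubectl apply " +
			"-f /etc/kubernetes/addons/metrics-apiservice.yaml " +
			"-f /etc/kubernetes/addons/metrics-server-deployment.yaml " +
			"-f /etc/kubernetes/addons/metrics-server-rbac.yaml " +
			"-f /etc/kubernetes/addons/metrics-server-service.yaml"

		out, err := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no", host, remote).CombinedOutput()
		if err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}
		log.Printf("applied metrics-server addons:\n%s", out)
	}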
	I0814 17:37:03.512364   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512842   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:03.512785   80921 retry.go:31] will retry after 4.358649621s: waiting for machine to come up
	I0814 17:37:09.324026   80228 start.go:364] duration metric: took 3m22.895078586s to acquireMachinesLock for "old-k8s-version-505584"
	I0814 17:37:09.324085   80228 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:09.324101   80228 fix.go:54] fixHost starting: 
	I0814 17:37:09.324533   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:09.324575   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:09.344085   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0814 17:37:09.344490   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:09.344980   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:37:09.345006   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:09.345416   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:09.345674   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:09.345842   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetState
	I0814 17:37:09.347489   80228 fix.go:112] recreateIfNeeded on old-k8s-version-505584: state=Stopped err=<nil>
	I0814 17:37:09.347511   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	W0814 17:37:09.347696   80228 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:09.349747   80228 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-505584" ...
	I0814 17:37:05.901013   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.901054   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.876377   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876820   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has current primary IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876845   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Found IP for machine: 192.168.50.184
	I0814 17:37:07.876857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserving static IP address...
	I0814 17:37:07.877281   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.877300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserved static IP address: 192.168.50.184
	I0814 17:37:07.877320   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | skip adding static IP to network mk-default-k8s-diff-port-885666 - found existing host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"}
	I0814 17:37:07.877339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Getting to WaitForSSH function...
	I0814 17:37:07.877355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for SSH to be available...
	I0814 17:37:07.879843   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880200   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.880242   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880419   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH client type: external
	I0814 17:37:07.880445   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa (-rw-------)
	I0814 17:37:07.880496   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:07.880517   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | About to run SSH command:
	I0814 17:37:07.880549   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | exit 0
	I0814 17:37:08.007553   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:08.007929   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetConfigRaw
	I0814 17:37:08.009171   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.012358   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.012772   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.012804   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.013076   79871 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/config.json ...
	I0814 17:37:08.013284   79871 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:08.013310   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:08.013579   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.015965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016325   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.016363   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016491   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.016680   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.016873   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.017004   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.017140   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.017354   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.017376   79871 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:08.132369   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:08.132404   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132657   79871 buildroot.go:166] provisioning hostname "default-k8s-diff-port-885666"
	I0814 17:37:08.132695   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.136230   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.136696   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136937   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.137163   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137350   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.137672   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.137878   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.137900   79871 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-885666 && echo "default-k8s-diff-port-885666" | sudo tee /etc/hostname
	I0814 17:37:08.273593   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-885666
	
	I0814 17:37:08.273626   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.276470   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.276830   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.276862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.277137   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.277382   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277547   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277713   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.277855   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.278052   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.278072   79871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-885666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-885666/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-885666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:08.401522   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:08.401556   79871 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:08.401602   79871 buildroot.go:174] setting up certificates
	I0814 17:37:08.401626   79871 provision.go:84] configureAuth start
	I0814 17:37:08.401650   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.401963   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.404855   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.405285   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405521   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.407826   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408338   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.408371   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408515   79871 provision.go:143] copyHostCerts
	I0814 17:37:08.408583   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:08.408597   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:08.408681   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:08.408812   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:08.408823   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:08.408861   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:08.408947   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:08.408956   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:08.408984   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:08.409064   79871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-885666 san=[127.0.0.1 192.168.50.184 default-k8s-diff-port-885666 localhost minikube]
	I0814 17:37:08.613459   79871 provision.go:177] copyRemoteCerts
	I0814 17:37:08.613530   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:08.613575   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.616704   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617044   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.617072   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.617515   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.617698   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.617844   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:08.705505   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:08.728835   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 17:37:08.751995   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:08.774577   79871 provision.go:87] duration metric: took 372.933752ms to configureAuth
	I0814 17:37:08.774609   79871 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:08.774812   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:08.774880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.777840   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778235   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.778260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778527   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.778752   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.778899   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.779020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.779162   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.779437   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.779458   79871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:09.055900   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:09.055927   79871 machine.go:97] duration metric: took 1.04262996s to provisionDockerMachine
	I0814 17:37:09.055943   79871 start.go:293] postStartSetup for "default-k8s-diff-port-885666" (driver="kvm2")
	I0814 17:37:09.055957   79871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:09.055982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.056325   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:09.056355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.059396   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.059853   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.059888   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.060064   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.060280   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.060558   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.060745   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.150649   79871 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:09.155263   79871 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:09.155295   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:09.155400   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:09.155500   79871 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:09.155623   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:09.167051   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:09.197223   79871 start.go:296] duration metric: took 141.264897ms for postStartSetup
	I0814 17:37:09.197324   79871 fix.go:56] duration metric: took 21.221265818s for fixHost
	I0814 17:37:09.197356   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.201388   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.201965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.202011   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.202109   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.202354   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202569   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202800   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.203010   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:09.203196   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:09.203209   79871 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:37:09.323868   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657029.302975780
	
	I0814 17:37:09.323892   79871 fix.go:216] guest clock: 1723657029.302975780
	I0814 17:37:09.323900   79871 fix.go:229] Guest: 2024-08-14 17:37:09.30297578 +0000 UTC Remote: 2024-08-14 17:37:09.197335302 +0000 UTC m=+253.546385360 (delta=105.640478ms)
	I0814 17:37:09.323918   79871 fix.go:200] guest clock delta is within tolerance: 105.640478ms
	I0814 17:37:09.323923   79871 start.go:83] releasing machines lock for "default-k8s-diff-port-885666", held for 21.347903434s
	I0814 17:37:09.323948   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.324209   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:09.327260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327802   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.327833   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327993   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328727   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328814   79871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:09.328862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.328955   79871 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:09.328972   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.331813   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332081   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332233   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332274   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332365   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332490   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332512   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332555   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332761   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.332824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332882   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.332926   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.333021   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.416041   79871 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:09.456024   79871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:09.604623   79871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:09.610562   79871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:09.610624   79871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:09.627298   79871 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:09.627344   79871 start.go:495] detecting cgroup driver to use...
	I0814 17:37:09.627418   79871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:09.648212   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:09.666047   79871 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:09.666107   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:09.681875   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:09.695920   79871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:09.824502   79871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:09.979561   79871 docker.go:233] disabling docker service ...
	I0814 17:37:09.979658   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:09.996877   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:10.014264   79871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:10.166653   79871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:10.288261   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:10.301868   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:10.320716   79871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:10.320788   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.331099   79871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:10.331158   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.342841   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.353762   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.364604   79871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:10.376521   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.386787   79871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.406713   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.418047   79871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:10.428368   79871 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:10.428433   79871 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:10.442759   79871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:10.452993   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:10.563097   79871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:10.716953   79871 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:10.717031   79871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:10.722685   79871 start.go:563] Will wait 60s for crictl version
	I0814 17:37:10.722759   79871 ssh_runner.go:195] Run: which crictl
	I0814 17:37:10.726621   79871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:10.764534   79871 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:10.764628   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.791513   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.822380   79871 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:09.351136   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .Start
	I0814 17:37:09.351338   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring networks are active...
	I0814 17:37:09.352075   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network default is active
	I0814 17:37:09.352333   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network mk-old-k8s-version-505584 is active
	I0814 17:37:09.352701   80228 main.go:141] libmachine: (old-k8s-version-505584) Getting domain xml...
	I0814 17:37:09.353363   80228 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
	I0814 17:37:10.664390   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting to get IP...
	I0814 17:37:10.665484   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.665915   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.665980   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.665888   81116 retry.go:31] will retry after 285.047327ms: waiting for machine to come up
	I0814 17:37:10.952552   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.953009   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.953036   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.952973   81116 retry.go:31] will retry after 281.728141ms: waiting for machine to come up
	I0814 17:37:11.236576   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.237153   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.237192   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.237079   81116 retry.go:31] will retry after 341.673836ms: waiting for machine to come up
	I0814 17:37:10.401790   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:11.400713   79521 node_ready.go:49] node "embed-certs-309673" has status "Ready":"True"
	I0814 17:37:11.400742   79521 node_ready.go:38] duration metric: took 7.503686271s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:11.400755   79521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:11.408217   79521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414215   79521 pod_ready.go:92] pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:11.414244   79521 pod_ready.go:81] duration metric: took 5.997997ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414256   79521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:13.420804   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:10.824020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:10.827965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828426   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:10.828465   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828807   79871 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:10.833261   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:10.846928   79871 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:10.847080   79871 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:10.847142   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:10.889355   79871 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:10.889453   79871 ssh_runner.go:195] Run: which lz4
	I0814 17:37:10.894405   79871 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:37:10.898992   79871 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:10.899029   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:37:12.155402   79871 crio.go:462] duration metric: took 1.261016682s to copy over tarball
	I0814 17:37:12.155485   79871 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:14.344118   79871 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18859644s)
	I0814 17:37:14.344162   79871 crio.go:469] duration metric: took 2.188726026s to extract the tarball
	I0814 17:37:14.344173   79871 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:14.380317   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:14.428289   79871 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:37:14.428312   79871 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:37:14.428326   79871 kubeadm.go:934] updating node { 192.168.50.184 8444 v1.31.0 crio true true} ...
	I0814 17:37:14.428422   79871 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-885666 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:14.428491   79871 ssh_runner.go:195] Run: crio config
	I0814 17:37:14.475385   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:14.475416   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:14.475433   79871 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:14.475464   79871 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-885666 NodeName:default-k8s-diff-port-885666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:37:14.475645   79871 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-885666"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:14.475712   79871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:37:14.485148   79871 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:14.485206   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:14.494161   79871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 17:37:14.511050   79871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:14.526395   79871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0814 17:37:14.543061   79871 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:14.546747   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:14.558022   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:14.671818   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:14.688541   79871 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666 for IP: 192.168.50.184
	I0814 17:37:14.688583   79871 certs.go:194] generating shared ca certs ...
	I0814 17:37:14.688609   79871 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:14.688823   79871 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:14.688889   79871 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:14.688903   79871 certs.go:256] generating profile certs ...
	I0814 17:37:14.689020   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/client.key
	I0814 17:37:14.689132   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key.690c84bc
	I0814 17:37:14.689182   79871 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key
	I0814 17:37:14.689310   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:14.689367   79871 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:14.689385   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:14.689422   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:14.689453   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:14.689479   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:14.689524   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:14.690168   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:14.717906   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:14.759373   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:14.809775   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:14.834875   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 17:37:14.857860   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:14.886813   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:14.909803   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:14.935075   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:14.959759   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:14.985877   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:15.008456   79871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:15.025602   79871 ssh_runner.go:195] Run: openssl version
	I0814 17:37:15.031392   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:15.041931   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046475   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046531   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.052377   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:15.063000   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:15.073463   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078411   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078471   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.083835   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:15.093753   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:15.103876   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108487   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108559   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.114104   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:15.124285   79871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:15.128515   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:15.134223   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:15.139700   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:15.145537   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:15.151287   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:15.156766   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:37:15.162149   79871 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:15.162256   79871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:15.162314   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.198745   79871 cri.go:89] found id: ""
	I0814 17:37:15.198814   79871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:15.212198   79871 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:15.212216   79871 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:15.212256   79871 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:15.224275   79871 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:15.225218   79871 kubeconfig.go:125] found "default-k8s-diff-port-885666" server: "https://192.168.50.184:8444"
	I0814 17:37:15.227291   79871 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:15.237448   79871 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.184
	I0814 17:37:15.237494   79871 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:15.237509   79871 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:15.237563   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.281593   79871 cri.go:89] found id: ""
	I0814 17:37:15.281662   79871 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:15.298596   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:15.308702   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:15.308723   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:15.308779   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:37:15.318348   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:15.318409   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:15.330049   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:37:15.341283   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:15.341373   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:15.350584   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.361658   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:15.361718   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.373526   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:37:15.382360   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:15.382432   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:15.392477   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:15.402387   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:15.528954   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:11.580887   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.581466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.581500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.581392   81116 retry.go:31] will retry after 514.448726ms: waiting for machine to come up
	I0814 17:37:12.098137   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.098652   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.098740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.098642   81116 retry.go:31] will retry after 649.302617ms: waiting for machine to come up
	I0814 17:37:12.749349   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.749777   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.749803   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.749736   81116 retry.go:31] will retry after 897.486278ms: waiting for machine to come up
	I0814 17:37:13.649145   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:13.649666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:13.649698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:13.649621   81116 retry.go:31] will retry after 1.017213079s: waiting for machine to come up
	I0814 17:37:14.669187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:14.669715   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:14.669740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:14.669679   81116 retry.go:31] will retry after 1.014709613s: waiting for machine to come up
	I0814 17:37:15.685748   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:15.686269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:15.686299   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:15.686217   81116 retry.go:31] will retry after 1.476940798s: waiting for machine to come up
	I0814 17:37:15.422067   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.421689   79521 pod_ready.go:92] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.421715   79521 pod_ready.go:81] duration metric: took 5.007451471s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.421724   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426620   79521 pod_ready.go:92] pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.426644   79521 pod_ready.go:81] duration metric: took 4.912475ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426657   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430754   79521 pod_ready.go:92] pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.430776   79521 pod_ready.go:81] duration metric: took 4.110475ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430787   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434469   79521 pod_ready.go:92] pod "kube-proxy-z8x9t" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.434487   79521 pod_ready.go:81] duration metric: took 3.693253ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434498   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438294   79521 pod_ready.go:92] pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.438314   79521 pod_ready.go:81] duration metric: took 3.80298ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438346   79521 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:18.445838   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.453075   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.676680   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.741803   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.831091   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:16.831186   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.332193   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.831346   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.331620   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.832011   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.331528   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.348083   79871 api_server.go:72] duration metric: took 2.516990388s to wait for apiserver process to appear ...
	I0814 17:37:19.348119   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:37:19.348144   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:17.164541   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:17.165093   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:17.165122   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:17.165017   81116 retry.go:31] will retry after 1.644726601s: waiting for machine to come up
	I0814 17:37:18.811628   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:18.812199   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:18.812224   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:18.812132   81116 retry.go:31] will retry after 2.740531885s: waiting for machine to come up
	I0814 17:37:21.576628   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.576657   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.576672   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.601355   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.601389   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.848481   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.855499   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:21.855530   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.349158   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.353345   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:22.353368   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.848954   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.853912   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:37:22.865096   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:22.865127   79871 api_server.go:131] duration metric: took 3.516999004s to wait for apiserver health ...
	I0814 17:37:22.865138   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:22.865146   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:22.866812   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:20.446123   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.446518   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:24.945729   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.867939   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:22.881586   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:22.899815   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:22.910873   79871 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:22.910928   79871 system_pods.go:61] "coredns-6f6b679f8f-mxc9v" [d1b9d422-faff-4709-a375-f8783e75e18c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:22.910946   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [a5473465-a1c1-4413-8e77-74fb1eb398a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:22.910956   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [06c53e48-b156-42b1-b381-818f75821196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:22.910966   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [18a2d7fb-4e18-4880-8812-63d25934699b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:22.910977   79871 system_pods.go:61] "kube-proxy-4rrff" [14453cc8-da7d-4dd4-b7fa-89a26dbbf23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:22.910995   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [f0455f16-9a3e-4ede-8524-f701b1ab4ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:22.911005   79871 system_pods.go:61] "metrics-server-6867b74b74-qtzm8" [04c797ec-2e38-42a7-a023-5f60c451f780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:22.911020   79871 system_pods.go:61] "storage-provisioner" [88c2e8f0-0706-494a-8e83-0ede8f129040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:22.911032   79871 system_pods.go:74] duration metric: took 11.192968ms to wait for pod list to return data ...
	I0814 17:37:22.911044   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:22.915096   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:22.915128   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:22.915140   79871 node_conditions.go:105] duration metric: took 4.087198ms to run NodePressure ...
	I0814 17:37:22.915165   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:23.204612   79871 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209643   79871 kubeadm.go:739] kubelet initialised
	I0814 17:37:23.209665   79871 kubeadm.go:740] duration metric: took 5.023123ms waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209673   79871 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:23.215957   79871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.221969   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.221993   79871 pod_ready.go:81] duration metric: took 6.011053ms for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.222008   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.222014   79871 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.227119   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227147   79871 pod_ready.go:81] duration metric: took 5.125006ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.227157   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227163   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.231297   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231321   79871 pod_ready.go:81] duration metric: took 4.149023ms for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.231346   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231355   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:25.239956   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:21.555057   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:21.555530   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:21.555562   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:21.555484   81116 retry.go:31] will retry after 3.159225533s: waiting for machine to come up
	I0814 17:37:24.716173   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:24.716482   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:24.716507   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:24.716451   81116 retry.go:31] will retry after 3.32732131s: waiting for machine to come up
	I0814 17:37:29.512066   79367 start.go:364] duration metric: took 55.26941078s to acquireMachinesLock for "no-preload-545149"
	I0814 17:37:29.512115   79367 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:29.512123   79367 fix.go:54] fixHost starting: 
	I0814 17:37:29.512539   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:29.512574   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:29.529625   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0814 17:37:29.530074   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:29.530558   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:37:29.530585   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:29.530930   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:29.531149   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:29.531291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:37:29.532999   79367 fix.go:112] recreateIfNeeded on no-preload-545149: state=Stopped err=<nil>
	I0814 17:37:29.533049   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	W0814 17:37:29.533224   79367 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:29.535000   79367 out.go:177] * Restarting existing kvm2 VM for "no-preload-545149" ...
	I0814 17:37:27.445398   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.945246   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:27.737698   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.737890   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:28.045690   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046151   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has current primary IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046177   80228 main.go:141] libmachine: (old-k8s-version-505584) Found IP for machine: 192.168.72.49
	I0814 17:37:28.046192   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserving static IP address...
	I0814 17:37:28.046500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.046524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | skip adding static IP to network mk-old-k8s-version-505584 - found existing host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"}
	I0814 17:37:28.046540   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserved static IP address: 192.168.72.49
	I0814 17:37:28.046559   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting for SSH to be available...
	I0814 17:37:28.046571   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Getting to WaitForSSH function...
	I0814 17:37:28.048709   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049082   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.049106   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049252   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH client type: external
	I0814 17:37:28.049285   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa (-rw-------)
	I0814 17:37:28.049325   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:28.049342   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | About to run SSH command:
	I0814 17:37:28.049356   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | exit 0
	I0814 17:37:28.179844   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:28.180193   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:37:28.180865   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.183617   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184074   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.184118   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184367   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:37:28.184641   80228 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:28.184663   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:28.184891   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.187158   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187517   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.187547   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187696   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.187857   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188027   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188178   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.188320   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.188570   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.188587   80228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:28.303564   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:28.303597   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.303831   80228 buildroot.go:166] provisioning hostname "old-k8s-version-505584"
	I0814 17:37:28.303856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.304021   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.306826   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307180   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.307210   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307415   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.307608   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307769   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307915   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.308131   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.308336   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.308354   80228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-505584 && echo "old-k8s-version-505584" | sudo tee /etc/hostname
	I0814 17:37:28.434224   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-505584
	
	I0814 17:37:28.434261   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.437350   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437633   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.437666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.438077   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438245   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438395   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.438623   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.438832   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.438857   80228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-505584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-505584/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-505584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:28.564784   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:28.564815   80228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:28.564858   80228 buildroot.go:174] setting up certificates
	I0814 17:37:28.564872   80228 provision.go:84] configureAuth start
	I0814 17:37:28.564884   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.565188   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.568217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.568698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.568731   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.569013   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.571364   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571780   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.571805   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571961   80228 provision.go:143] copyHostCerts
	I0814 17:37:28.572023   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:28.572032   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:28.572076   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:28.572176   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:28.572184   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:28.572206   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:28.572275   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:28.572284   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:28.572337   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:28.572435   80228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-505584 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
	I0814 17:37:28.804798   80228 provision.go:177] copyRemoteCerts
	I0814 17:37:28.804853   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:28.804879   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.807967   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.808302   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808458   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.808690   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.808874   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.809001   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:28.900346   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:28.926959   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 17:37:28.955373   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:28.984436   80228 provision.go:87] duration metric: took 419.552519ms to configureAuth
	I0814 17:37:28.984463   80228 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:28.984630   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:37:28.984713   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.987602   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988077   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.988107   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988237   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.988486   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988641   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988768   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.988986   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.989209   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.989234   80228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:29.262630   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:29.262656   80228 machine.go:97] duration metric: took 1.078000469s to provisionDockerMachine
	I0814 17:37:29.262669   80228 start.go:293] postStartSetup for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:37:29.262683   80228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:29.262704   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.263051   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:29.263082   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.266020   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.266495   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266720   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.266919   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.267093   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.267253   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.354027   80228 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:29.358196   80228 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:29.358224   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:29.358304   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:29.358416   80228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:29.358543   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:29.367802   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:29.392802   80228 start.go:296] duration metric: took 130.117007ms for postStartSetup
	I0814 17:37:29.392846   80228 fix.go:56] duration metric: took 20.068754346s for fixHost
	I0814 17:37:29.392871   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.395638   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396032   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.396064   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396251   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.396516   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396698   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396893   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.397155   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:29.397326   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:29.397340   80228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:37:29.511889   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657049.468340520
	
	I0814 17:37:29.511913   80228 fix.go:216] guest clock: 1723657049.468340520
	I0814 17:37:29.511923   80228 fix.go:229] Guest: 2024-08-14 17:37:29.46834052 +0000 UTC Remote: 2024-08-14 17:37:29.392851248 +0000 UTC m=+223.104093144 (delta=75.489272ms)
	I0814 17:37:29.511983   80228 fix.go:200] guest clock delta is within tolerance: 75.489272ms
	I0814 17:37:29.511996   80228 start.go:83] releasing machines lock for "old-k8s-version-505584", held for 20.187937886s
	I0814 17:37:29.512031   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.512333   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:29.515152   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515487   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.515524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515735   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516299   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516497   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516643   80228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:29.516723   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.516727   80228 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:29.516752   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.519600   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.519751   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520017   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520045   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520164   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520192   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520341   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520423   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520520   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520588   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520646   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520718   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.520780   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.642824   80228 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:29.648834   80228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:29.795482   80228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:29.801407   80228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:29.801486   80228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:29.821662   80228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:29.821684   80228 start.go:495] detecting cgroup driver to use...
	I0814 17:37:29.821761   80228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:29.843829   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:29.859505   80228 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:29.859590   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:29.873790   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:29.889295   80228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:30.035909   80228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:30.209521   80228 docker.go:233] disabling docker service ...
	I0814 17:37:30.209574   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:30.226980   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:30.241678   80228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:30.375116   80228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:30.498357   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:30.512272   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:30.533062   80228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:37:30.533130   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.543595   80228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:30.543664   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.554139   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.564417   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.574627   80228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:30.584957   80228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:30.594667   80228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:30.594720   80228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:30.606826   80228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:30.621990   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:30.758992   80228 ssh_runner.go:195] Run: sudo systemctl restart crio
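
The sed invocations above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Below is a minimal Go sketch of the same two edits, using only the file path and option values visible in the log; it is illustrative, not minikube's actual implementation.

package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits from the log: point CRI-O at the
// desired pause image and force the cgroupfs cgroup manager.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Values taken from the log lines above (pause:3.2, cgroupfs).
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
	// A `systemctl restart crio`, as in the log, is still needed afterwards.
}
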
	I0814 17:37:30.915494   80228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:30.915572   80228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:30.920692   80228 start.go:563] Will wait 60s for crictl version
	I0814 17:37:30.920767   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:30.924365   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:30.964662   80228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:30.964756   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:30.995534   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:31.025400   80228 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:37:31.026943   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:31.030217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030630   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:31.030665   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030943   80228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:31.034960   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:31.047742   80228 kubeadm.go:883] updating cluster {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:31.047864   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:37:31.047926   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:31.092203   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:31.092278   80228 ssh_runner.go:195] Run: which lz4
	I0814 17:37:31.096471   80228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:37:31.100610   80228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:31.100642   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:37:29.536310   79367 main.go:141] libmachine: (no-preload-545149) Calling .Start
	I0814 17:37:29.536513   79367 main.go:141] libmachine: (no-preload-545149) Ensuring networks are active...
	I0814 17:37:29.537431   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network default is active
	I0814 17:37:29.537935   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network mk-no-preload-545149 is active
	I0814 17:37:29.538468   79367 main.go:141] libmachine: (no-preload-545149) Getting domain xml...
	I0814 17:37:29.539383   79367 main.go:141] libmachine: (no-preload-545149) Creating domain...
	I0814 17:37:30.863155   79367 main.go:141] libmachine: (no-preload-545149) Waiting to get IP...
	I0814 17:37:30.864257   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:30.864722   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:30.864812   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:30.864695   81248 retry.go:31] will retry after 244.342973ms: waiting for machine to come up
	I0814 17:37:31.111211   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.111784   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.111806   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.111735   81248 retry.go:31] will retry after 277.033145ms: waiting for machine to come up
	I0814 17:37:31.390071   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.390511   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.390554   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.390429   81248 retry.go:31] will retry after 320.225451ms: waiting for machine to come up
	I0814 17:37:31.949069   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:34.445833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:31.741110   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:33.239418   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.239449   79871 pod_ready.go:81] duration metric: took 10.008084028s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.239462   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244600   79871 pod_ready.go:92] pod "kube-proxy-4rrff" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.244628   79871 pod_ready.go:81] duration metric: took 5.157296ms for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244648   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253613   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:35.253643   79871 pod_ready.go:81] duration metric: took 2.008985731s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253657   79871 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
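
The pod_ready.go lines above poll each pod's Ready condition until it reports True or the 4m0s budget expires. A small sketch of that pattern with client-go follows; it assumes a client-go dependency and is a simplified stand-in for minikube's own helper, not its actual code.

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's Ready condition is True or the timeout
// expires, mirroring the "waiting up to 4m0s for pod ..." messages above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient API errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
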
	I0814 17:37:32.582064   80228 crio.go:462] duration metric: took 1.485645107s to copy over tarball
	I0814 17:37:32.582151   80228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:35.556765   80228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974581109s)
	I0814 17:37:35.556795   80228 crio.go:469] duration metric: took 2.9747s to extract the tarball
	I0814 17:37:35.556802   80228 ssh_runner.go:146] rm: /preloaded.tar.lz4
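
The preload step above copies the cached image tarball to /preloaded.tar.lz4, unpacks it under /var with tar and lz4, and then removes it. A short Go sketch of that extract-and-clean-up step, built only from the command shown in the log (the sudo/removal details are assumptions for illustration):

package preload

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the command in the log: unpack the lz4-compressed
// image tarball into /var, then delete the tarball.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	// In practice the removal may also require elevated privileges.
	return os.Remove(tarball)
}
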
	I0814 17:37:35.599129   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:35.632752   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:35.632775   80228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:35.632831   80228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.632864   80228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.632892   80228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:37:35.632911   80228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.632944   80228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.633112   80228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634793   80228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.634821   80228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:37:35.634824   80228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634885   80228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.634910   80228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.635009   80228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.635082   80228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.635265   80228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.905566   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:37:35.953168   80228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:37:35.953210   80228 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:37:35.953260   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:35.957961   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:35.978859   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.978920   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.988556   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.993281   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.997933   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.018501   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.043527   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.146739   80228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:37:36.146812   80228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:37:36.146832   80228 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.146852   80228 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.146881   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.146891   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163832   80228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:37:36.163856   80228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:37:36.163877   80228 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.163889   80228 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.163923   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163924   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163927   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.172482   80228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:37:36.172530   80228 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.172599   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.195157   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.195208   80228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:37:36.195165   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.195242   80228 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.195245   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.195277   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.237454   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:37:36.237519   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.237549   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.292614   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.306771   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.306794   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
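
The cache_images.go "needs transfer" messages above come from comparing the image ID reported by the runtime with the expected hash; stale or missing images are removed with crictl so the cached copy can be loaded instead. A hedged sketch of that check, using only the podman/crictl invocations visible in the log:

package cacheimages

import (
	"os/exec"
	"strings"
)

// needsTransfer reports whether the runtime's copy of image differs from the
// expected ID, mirroring the "does not exist at hash ..." checks above.
func needsTransfer(image, wantID string) (bool, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true, nil // image not present at all -> transfer needed
	}
	return strings.TrimSpace(string(out)) != wantID, nil
}

// removeStale drops the stale image so the cached copy can be loaded instead.
func removeStale(image string) error {
	return exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
}
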
	I0814 17:37:31.712067   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.712601   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.712630   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.712575   81248 retry.go:31] will retry after 546.687472ms: waiting for machine to come up
	I0814 17:37:32.261457   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.261921   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.261950   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.261854   81248 retry.go:31] will retry after 484.345236ms: waiting for machine to come up
	I0814 17:37:32.747475   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.748118   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.748149   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.748060   81248 retry.go:31] will retry after 899.564198ms: waiting for machine to come up
	I0814 17:37:33.649684   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:33.650206   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:33.650234   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:33.650155   81248 retry.go:31] will retry after 1.039934932s: waiting for machine to come up
	I0814 17:37:34.691741   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:34.692197   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:34.692220   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:34.692169   81248 retry.go:31] will retry after 925.402437ms: waiting for machine to come up
	I0814 17:37:35.618737   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:35.619169   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:35.619200   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:35.619102   81248 retry.go:31] will retry after 1.401066913s: waiting for machine to come up
	I0814 17:37:36.447039   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:38.945321   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:37.260912   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:39.759967   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:36.321893   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.339836   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.339929   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.426588   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.426659   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.433149   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.469717   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:36.477512   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.477583   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.477761   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.538635   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:37:36.557712   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:37:36.558304   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.700263   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:37:36.700333   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:37:36.700410   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:37:36.700481   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:37:36.700527   80228 cache_images.go:92] duration metric: took 1.067740607s to LoadCachedImages
	W0814 17:37:36.700602   80228 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 17:37:36.700623   80228 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0814 17:37:36.700757   80228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-505584 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:36.700846   80228 ssh_runner.go:195] Run: crio config
	I0814 17:37:36.748814   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:37:36.748843   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:36.748860   80228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:36.748885   80228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-505584 NodeName:old-k8s-version-505584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:37:36.749053   80228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-505584"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
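
The kubeadm config above is rendered from the cluster settings (advertise address, node name, pod subnet, Kubernetes version) and written to /var/tmp/minikube/kubeadm.yaml.new before the kubeadm init phases run. A minimal text/template sketch of rendering a config like this; the struct fields and the trimmed template are illustrative, not minikube's actual template.

package kubeadmcfg

import (
	"os"
	"text/template"
)

// cfg holds only a handful of the values visible in the generated YAML above.
type cfg struct {
	AdvertiseAddress  string
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`))

// render writes the rendered config; minikube writes the full version to
// /var/tmp/minikube/kubeadm.yaml.new before running `kubeadm init phase ...`.
func render(c cfg) error { return tmpl.Execute(os.Stdout, c) }
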
	I0814 17:37:36.749129   80228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:37:36.760058   80228 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:36.760131   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:36.769388   80228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0814 17:37:36.786594   80228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:36.807695   80228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0814 17:37:36.825609   80228 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:36.829296   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:36.841882   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:36.976199   80228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:36.993682   80228 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584 for IP: 192.168.72.49
	I0814 17:37:36.993707   80228 certs.go:194] generating shared ca certs ...
	I0814 17:37:36.993728   80228 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:36.993924   80228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:36.993985   80228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:36.993998   80228 certs.go:256] generating profile certs ...
	I0814 17:37:36.994115   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key
	I0814 17:37:36.994206   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f
	I0814 17:37:36.994261   80228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key
	I0814 17:37:36.994428   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:36.994478   80228 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:36.994492   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:36.994522   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:36.994557   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:36.994603   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:36.994661   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:36.995558   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:37.043910   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:37.073810   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:37.097939   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:37.124449   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 17:37:37.154747   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:37.179474   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:37.204471   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:37.228579   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:37.266929   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:37.292912   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:37.316803   80228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:37.332934   80228 ssh_runner.go:195] Run: openssl version
	I0814 17:37:37.339316   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:37.349829   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354230   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354297   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.360089   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:37.371417   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:37.381777   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385894   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385955   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.391826   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:37.402049   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:37.412038   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416395   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416448   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.421794   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:37.431868   80228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:37.436305   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:37.442838   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:37.448991   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:37.454769   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:37.460381   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:37.466406   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
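
The `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. The same check can be expressed with crypto/x509; this is a standalone sketch of that check, not minikube's code.

package certs

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid at
// now+window, equivalent to `openssl x509 -checkend <seconds>`.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}
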
	I0814 17:37:37.472466   80228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:37.472584   80228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:37.472636   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.508256   80228 cri.go:89] found id: ""
	I0814 17:37:37.508323   80228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:37.518824   80228 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:37.518856   80228 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:37.518941   80228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:37.529328   80228 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:37.530242   80228 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:37.530890   80228 kubeconfig.go:62] /home/jenkins/minikube-integration/19446-13977/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-505584" cluster setting kubeconfig missing "old-k8s-version-505584" context setting]
	I0814 17:37:37.531922   80228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:37.539843   80228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:37.550012   80228 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0814 17:37:37.550051   80228 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:37.550063   80228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:37.550113   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.590226   80228 cri.go:89] found id: ""
	I0814 17:37:37.590307   80228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:37.606242   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:37.615340   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:37.615377   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:37.615436   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:37:37.623996   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:37.624063   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:37.633356   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:37:37.642888   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:37.642958   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:37.652532   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.661607   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:37.661679   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.670876   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:37:37.679780   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:37.679846   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:37.690044   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:37.699617   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:37.813799   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.666487   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.901307   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.029983   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.139056   80228 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:39.139133   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:39.639191   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.139315   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.639292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:41.139421   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:37.021766   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:37.022253   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:37.022282   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:37.022216   81248 retry.go:31] will retry after 2.184222941s: waiting for machine to come up
	I0814 17:37:39.209777   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:39.210239   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:39.210265   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:39.210203   81248 retry.go:31] will retry after 2.903962154s: waiting for machine to come up
	I0814 17:37:41.445413   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:43.949816   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.760985   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:44.260273   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.639312   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.139387   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.639981   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.139499   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.639391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.139425   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.639677   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.139466   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.639426   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:46.140065   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.116682   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:42.117116   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:42.117154   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:42.117086   81248 retry.go:31] will retry after 3.387467992s: waiting for machine to come up
	I0814 17:37:45.505680   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:45.506034   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:45.506056   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:45.505986   81248 retry.go:31] will retry after 2.944973353s: waiting for machine to come up
	I0814 17:37:46.443772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:48.445046   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.759297   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:49.260881   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.640043   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.139213   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.639848   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.140080   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.639961   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.139473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.639212   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.139781   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.640028   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.140140   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.452516   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453064   79367 main.go:141] libmachine: (no-preload-545149) Found IP for machine: 192.168.39.162
	I0814 17:37:48.453092   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has current primary IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453099   79367 main.go:141] libmachine: (no-preload-545149) Reserving static IP address...
	I0814 17:37:48.453513   79367 main.go:141] libmachine: (no-preload-545149) Reserved static IP address: 192.168.39.162
	I0814 17:37:48.453564   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.453578   79367 main.go:141] libmachine: (no-preload-545149) Waiting for SSH to be available...
	I0814 17:37:48.453608   79367 main.go:141] libmachine: (no-preload-545149) DBG | skip adding static IP to network mk-no-preload-545149 - found existing host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"}
	I0814 17:37:48.453630   79367 main.go:141] libmachine: (no-preload-545149) DBG | Getting to WaitForSSH function...
	I0814 17:37:48.455959   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456279   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.456304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456429   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH client type: external
	I0814 17:37:48.456449   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa (-rw-------)
	I0814 17:37:48.456490   79367 main.go:141] libmachine: (no-preload-545149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:48.456506   79367 main.go:141] libmachine: (no-preload-545149) DBG | About to run SSH command:
	I0814 17:37:48.456520   79367 main.go:141] libmachine: (no-preload-545149) DBG | exit 0
	I0814 17:37:48.579489   79367 main.go:141] libmachine: (no-preload-545149) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:48.579924   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetConfigRaw
	I0814 17:37:48.580615   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.583202   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583545   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.583592   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583857   79367 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/config.json ...
	I0814 17:37:48.584093   79367 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:48.584113   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:48.584340   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.586712   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587086   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.587107   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587259   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.587441   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587720   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.587869   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.588029   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.588040   79367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:48.691255   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:48.691290   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691555   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:37:48.691593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691798   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.694492   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694768   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.694797   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694907   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.695084   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695248   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695396   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.695556   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.695777   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.695798   79367 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-545149 && echo "no-preload-545149" | sudo tee /etc/hostname
	I0814 17:37:48.813509   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-545149
	
	I0814 17:37:48.813537   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.816304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816698   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.816732   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816884   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.817057   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817265   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817393   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.817586   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.817809   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.817836   79367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:48.927482   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:48.927512   79367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:48.927540   79367 buildroot.go:174] setting up certificates
	I0814 17:37:48.927551   79367 provision.go:84] configureAuth start
	I0814 17:37:48.927567   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.927831   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.930532   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.930879   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.930906   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.931104   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.933420   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933754   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.933783   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933893   79367 provision.go:143] copyHostCerts
	I0814 17:37:48.933968   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:48.933979   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:48.934040   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:48.934146   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:48.934156   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:48.934186   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:48.934262   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:48.934271   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:48.934302   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:48.934377   79367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.no-preload-545149 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-545149]
	I0814 17:37:49.287517   79367 provision.go:177] copyRemoteCerts
	I0814 17:37:49.287580   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:49.287607   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.290280   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.290690   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290856   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.291063   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.291180   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.291304   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.374716   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:49.398652   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:37:49.422885   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:37:49.448774   79367 provision.go:87] duration metric: took 521.207251ms to configureAuth
	I0814 17:37:49.448800   79367 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:49.448972   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:49.449064   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.452034   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452373   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.452403   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452604   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.452859   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453058   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453217   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.453388   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.453579   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.453601   79367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:49.711896   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:49.711922   79367 machine.go:97] duration metric: took 1.127817152s to provisionDockerMachine
	I0814 17:37:49.711933   79367 start.go:293] postStartSetup for "no-preload-545149" (driver="kvm2")
	I0814 17:37:49.711942   79367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:49.711977   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.712299   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:49.712324   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.714736   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715059   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.715097   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715232   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.715428   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.715616   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.715769   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.797746   79367 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:49.801764   79367 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:49.801794   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:49.801863   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:49.801960   79367 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:49.802081   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:49.811506   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:49.834762   79367 start.go:296] duration metric: took 122.81358ms for postStartSetup
	I0814 17:37:49.834812   79367 fix.go:56] duration metric: took 20.32268926s for fixHost
	I0814 17:37:49.834837   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.837418   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837739   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.837768   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837903   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.838114   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838292   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838438   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.838643   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.838838   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.838850   79367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:49.944936   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657069.919883473
	
	I0814 17:37:49.944965   79367 fix.go:216] guest clock: 1723657069.919883473
	I0814 17:37:49.944975   79367 fix.go:229] Guest: 2024-08-14 17:37:49.919883473 +0000 UTC Remote: 2024-08-14 17:37:49.834818813 +0000 UTC m=+358.184638535 (delta=85.06466ms)
	I0814 17:37:49.945005   79367 fix.go:200] guest clock delta is within tolerance: 85.06466ms
	I0814 17:37:49.945017   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 20.432923283s
	I0814 17:37:49.945044   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.945291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:49.947847   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948269   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.948295   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948500   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949082   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949262   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949347   79367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:49.949406   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.949517   79367 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:49.949541   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.952281   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952328   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952692   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952833   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.952836   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952895   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.953037   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.953075   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953201   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953212   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953400   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953412   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.953543   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:50.072094   79367 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:50.080210   79367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:50.227736   79367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:50.233533   79367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:50.233597   79367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:50.249452   79367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:50.249474   79367 start.go:495] detecting cgroup driver to use...
	I0814 17:37:50.249552   79367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:50.265740   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:50.278769   79367 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:50.278833   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:50.291625   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:50.304529   79367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:50.415405   79367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:50.556016   79367 docker.go:233] disabling docker service ...
	I0814 17:37:50.556092   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:50.570197   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:50.583068   79367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:50.721414   79367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:50.850890   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:50.864530   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:50.882021   79367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:50.882097   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.891490   79367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:50.891564   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.901437   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.911316   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.920935   79367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:50.930571   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.940106   79367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.957351   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.967222   79367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:50.976120   79367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:50.976170   79367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:50.990922   79367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:51.000086   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:51.116655   79367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:51.246182   79367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:51.246265   79367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:51.250838   79367 start.go:563] Will wait 60s for crictl version
	I0814 17:37:51.250900   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.254633   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:51.299890   79367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:51.299992   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.328292   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.360415   79367 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:51.361536   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:51.364443   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.364884   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:51.364914   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.365112   79367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:51.368941   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:51.380519   79367 kubeadm.go:883] updating cluster {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:51.380668   79367 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:51.380705   79367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:51.413314   79367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:51.413346   79367 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:51.413417   79367 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.413435   79367 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.413452   79367 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.413395   79367 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.413473   79367 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 17:37:51.413440   79367 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.413521   79367 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.413529   79367 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.414920   79367 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.414940   79367 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 17:37:51.414983   79367 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.415006   79367 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.415010   79367 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.414982   79367 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.415070   79367 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.415100   79367 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.664642   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.686463   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:50.445457   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:52.945915   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.762809   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:54.259593   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.639969   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.139918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.639403   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.139921   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.640224   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.140272   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.639242   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.139908   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.639233   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:56.139955   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.699627   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 17:37:51.718031   79367 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 17:37:51.718085   79367 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.718133   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.736370   79367 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 17:37:51.736408   79367 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.736454   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.779229   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.800986   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.819343   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.841240   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.853614   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.853650   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.853753   79367 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 17:37:51.853798   79367 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.853842   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.866717   79367 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 17:37:51.866757   79367 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.866807   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.908593   79367 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 17:37:51.908644   79367 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.908701   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.936701   79367 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 17:37:51.936737   79367 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.936784   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.944882   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.944962   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.944983   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.945008   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.945070   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.945089   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.063281   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.080543   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.080556   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.080574   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:52.080629   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:52.080647   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.126573   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.205600   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.205623   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.236617   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 17:37:52.236678   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.236757   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.237083   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 17:37:52.237161   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:52.238804   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 17:37:52.238891   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:52.294945   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 17:37:52.295018   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 17:37:52.295064   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:37:52.295103   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 17:37:52.295127   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295189   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295110   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:52.302365   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 17:37:52.302388   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 17:37:52.302423   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 17:37:52.302472   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:37:52.306933   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 17:37:52.307107   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 17:37:52.309298   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.976780716s)
	I0814 17:37:54.272032   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 17:37:54.272053   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.272063   79367 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.962736886s)
	I0814 17:37:54.272100   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.969503874s)
	I0814 17:37:54.272150   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 17:37:54.272105   79367 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 17:37:54.272192   79367 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.272250   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:56.021236   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.749108117s)
	I0814 17:37:56.021281   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 17:37:56.021288   79367 ssh_runner.go:235] Completed: which crictl: (1.749013682s)
	I0814 17:37:56.021309   79367 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021370   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021386   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:55.445017   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:57.445204   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:59.945329   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.260666   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:58.760907   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.639799   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.140184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.639918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.139310   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.639393   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.140139   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.639614   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.139472   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.139946   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.830150   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.808753337s)
	I0814 17:37:59.830181   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 17:37:59.830205   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:59.830208   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.80880721s)
	I0814 17:37:59.830253   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:59.830255   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:38:02.444320   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:04.444667   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.260951   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:03.759895   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.639422   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.139858   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.639412   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.140047   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.640170   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.139779   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.639728   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.139343   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.640249   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.139448   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.796675   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.966400982s)
	I0814 17:38:01.796690   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.966414051s)
	I0814 17:38:01.796708   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 17:38:01.796735   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.796757   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:38:01.796796   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.841898   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 17:38:01.841994   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.571965   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.775142217s)
	I0814 17:38:03.571991   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.729967853s)
	I0814 17:38:03.572002   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 17:38:03.572019   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 17:38:03.572028   79367 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.572079   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:04.422670   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 17:38:04.422705   79367 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:04.422760   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:06.277419   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.854632861s)
	I0814 17:38:06.277457   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 17:38:06.277488   79367 cache_images.go:123] Successfully loaded all cached images
	I0814 17:38:06.277494   79367 cache_images.go:92] duration metric: took 14.864134758s to LoadCachedImages
	I0814 17:38:06.277504   79367 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.0 crio true true} ...
	I0814 17:38:06.277628   79367 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-545149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:38:06.277690   79367 ssh_runner.go:195] Run: crio config
	I0814 17:38:06.337971   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:06.337990   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:06.337999   79367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:38:06.338019   79367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545149 NodeName:no-preload-545149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:38:06.338148   79367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:38:06.338222   79367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:38:06.348156   79367 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:38:06.348219   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:38:06.356784   79367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0814 17:38:06.372439   79367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:38:06.388610   79367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0814 17:38:06.405084   79367 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I0814 17:38:06.408753   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:38:06.420313   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:38:06.546115   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:38:06.563747   79367 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149 for IP: 192.168.39.162
	I0814 17:38:06.563776   79367 certs.go:194] generating shared ca certs ...
	I0814 17:38:06.563799   79367 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:38:06.563973   79367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:38:06.564035   79367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:38:06.564058   79367 certs.go:256] generating profile certs ...
	I0814 17:38:06.564150   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/client.key
	I0814 17:38:06.564207   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key.d0704694
	I0814 17:38:06.564241   79367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key
	I0814 17:38:06.564349   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:38:06.564377   79367 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:38:06.564386   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:38:06.564411   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:38:06.564437   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:38:06.564459   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:38:06.564497   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:38:06.565269   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:38:06.592622   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:38:06.619148   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:38:06.646169   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:38:06.682399   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:38:06.446354   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.948005   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:05.760991   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.260189   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:10.260816   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:06.639416   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.140176   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.639682   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.140063   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.640014   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.139435   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.639256   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.139949   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.640283   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:11.139394   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.714195   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:38:06.750431   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:38:06.772702   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:38:06.793932   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:38:06.815601   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:38:06.837187   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:38:06.858175   79367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:38:06.876187   79367 ssh_runner.go:195] Run: openssl version
	I0814 17:38:06.881909   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:38:06.892057   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896191   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896251   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.901630   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:38:06.910888   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:38:06.920223   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924480   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924527   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.929591   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:38:06.939072   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:38:06.949970   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954288   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954339   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.959551   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:38:06.969505   79367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:38:06.973905   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:38:06.980211   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:38:06.986779   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:38:06.992220   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:38:06.997446   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:38:07.002681   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
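The openssl x509 -checkend 86400 runs above verify that each control-plane certificate stays valid for at least the next 24 hours before minikube reuses it. A minimal Go sketch of an equivalent check follows; the file path and helper name are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM-encoded certificate at path
// expires within the given window (the `openssl x509 -checkend` equivalent).
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path mirrors one of the certificates checked in the log; adjust as needed.
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for at least 24h")
	}
}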
	I0814 17:38:07.008037   79367 kubeadm.go:392] StartCluster: {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:38:07.008131   79367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:38:07.008188   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.043144   79367 cri.go:89] found id: ""
	I0814 17:38:07.043214   79367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:38:07.052215   79367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:38:07.052233   79367 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:38:07.052281   79367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:38:07.060618   79367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:38:07.061557   79367 kubeconfig.go:125] found "no-preload-545149" server: "https://192.168.39.162:8443"
	I0814 17:38:07.063554   79367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:38:07.072026   79367 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I0814 17:38:07.072064   79367 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:38:07.072075   79367 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:38:07.072117   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.109349   79367 cri.go:89] found id: ""
	I0814 17:38:07.109412   79367 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:38:07.126888   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:38:07.138272   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:38:07.138293   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:38:07.138367   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:38:07.147160   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:38:07.147220   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:38:07.156664   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:38:07.165122   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:38:07.165167   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:38:07.173478   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.181391   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:38:07.181449   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.189750   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:38:07.198215   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:38:07.198274   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:38:07.207384   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:38:07.216034   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:07.337710   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.227720   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.455979   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.521250   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.654574   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:38:08.654684   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.155639   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.655182   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.696193   79367 api_server.go:72] duration metric: took 1.041620068s to wait for apiserver process to appear ...
	I0814 17:38:09.696223   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:38:09.696241   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:09.696703   79367 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I0814 17:38:10.197180   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.389673   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.389702   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.389717   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.403106   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.403138   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.696486   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.700755   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:12.700784   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.196293   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.200564   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:13.200592   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.697253   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.705430   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:38:13.732816   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:38:13.732843   79367 api_server.go:131] duration metric: took 4.036614106s to wait for apiserver health ...
	I0814 17:38:13.732852   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:13.732860   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:13.734904   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
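The healthz probes above poll https://192.168.39.162:8443/healthz, treating 403 (anonymous access forbidden) and 500 (postStartHooks such as rbac/bootstrap-roles still pending) as not-ready, until the endpoint answers 200 "ok". A rough Go sketch of such a poll loop follows; the timeout and the skipped TLS verification are assumptions made for illustration, not minikube's real client setup.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
// or the overall timeout elapses. TLS verification is skipped purely for the
// sketch; a real client would present the cluster CA and client certificates.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Connection refused, 403 and 500 all fall through to a retry.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.162:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}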
	I0814 17:38:11.444846   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:13.943583   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:12.759294   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:14.760919   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:11.640107   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.140034   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.639463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.139428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.639575   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.140005   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.639473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.140124   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.639459   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:16.139187   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.736533   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:38:13.756650   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:38:13.776947   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:38:13.803170   79367 system_pods.go:59] 8 kube-system pods found
	I0814 17:38:13.803214   79367 system_pods.go:61] "coredns-6f6b679f8f-tt46z" [169beaf0-0310-47d5-b212-9a81c6b3df68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:38:13.803228   79367 system_pods.go:61] "etcd-no-preload-545149" [47e22bb4-bedb-433f-ae2e-f281269c6e87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:38:13.803240   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [37854a66-b05b-49fe-834b-98f724087ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:38:13.803249   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [69189ec1-6f8c-4613-bf47-46e101a14ecd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:38:13.803307   79367 system_pods.go:61] "kube-proxy-gfrqp" [2206243d-f6e0-462c-969c-60e192038700] Running
	I0814 17:38:13.803314   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [0bbd41bd-0a18-486b-b78c-9a0e9efe209a] Running
	I0814 17:38:13.803322   79367 system_pods.go:61] "metrics-server-6867b74b74-8c2cx" [b30f3018-f316-4997-a8fa-ff6c83aa7dd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:38:13.803341   79367 system_pods.go:61] "storage-provisioner" [635027cc-ac5d-4474-a243-ef48b3580998] Running
	I0814 17:38:13.803349   79367 system_pods.go:74] duration metric: took 26.377795ms to wait for pod list to return data ...
	I0814 17:38:13.803357   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:38:13.814093   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:38:13.814120   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:38:13.814131   79367 node_conditions.go:105] duration metric: took 10.768606ms to run NodePressure ...
	I0814 17:38:13.814147   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:14.196481   79367 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202205   79367 kubeadm.go:739] kubelet initialised
	I0814 17:38:14.202239   79367 kubeadm.go:740] duration metric: took 5.723699ms waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202250   79367 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:38:14.209431   79367 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.215568   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215597   79367 pod_ready.go:81] duration metric: took 6.13175ms for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.215610   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215620   79367 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.227611   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227647   79367 pod_ready.go:81] duration metric: took 12.016107ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.227661   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227669   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.235095   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235130   79367 pod_ready.go:81] duration metric: took 7.452418ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.235143   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235153   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.244417   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244447   79367 pod_ready.go:81] duration metric: took 9.283911ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.244459   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244466   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999946   79367 pod_ready.go:92] pod "kube-proxy-gfrqp" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:14.999968   79367 pod_ready.go:81] duration metric: took 755.491905ms for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999977   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:15.945421   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:18.444758   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.761265   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.639219   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.139463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.639839   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.140251   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.639890   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.139999   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.639652   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.139316   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.639809   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:21.139471   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.005796   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.006769   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:20.506792   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:20.506815   79367 pod_ready.go:81] duration metric: took 5.50683258s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.506825   79367 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
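The pod_ready waits above and below re-read each pod until its Ready condition turns True, logging Ready:"False" on every miss. A condensed client-go sketch of that readiness check follows; the kubeconfig path is a placeholder and the pod name is taken from the log, so treat both as illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has its Ready condition set to True.
func podIsReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path is a placeholder for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ready, err := podIsReady(ctx, client, "kube-system", "metrics-server-6867b74b74-8c2cx")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}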
	I0814 17:38:20.445449   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:22.446622   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:24.943859   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.760870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:23.761708   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.139292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.640151   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.139450   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.639996   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.139447   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.639267   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.139595   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.639451   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:26.140190   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.513577   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:25.012936   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.945216   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.444769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.260276   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:28.263789   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.640120   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.140141   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.640184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.139896   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.140246   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.639895   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.139860   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.639358   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:31.140029   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.014354   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.516049   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.944967   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.444885   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:30.760174   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:33.259870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:35.260137   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.639317   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.140039   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.139240   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.640181   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.139789   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.639297   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.139871   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.639347   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:36.140044   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.013464   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.513741   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.944347   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:38.945374   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:37.759866   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:39.760334   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.640132   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.139254   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.639457   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.139928   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.639196   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:39.139906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:39.139980   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:39.179494   80228 cri.go:89] found id: ""
	I0814 17:38:39.179524   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.179535   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:39.179543   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:39.179619   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:39.210704   80228 cri.go:89] found id: ""
	I0814 17:38:39.210732   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.210741   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:39.210746   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:39.210796   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:39.247562   80228 cri.go:89] found id: ""
	I0814 17:38:39.247590   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.247597   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:39.247603   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:39.247665   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:39.281456   80228 cri.go:89] found id: ""
	I0814 17:38:39.281480   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.281488   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:39.281494   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:39.281553   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:39.318588   80228 cri.go:89] found id: ""
	I0814 17:38:39.318620   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.318630   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:39.318638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:39.318695   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:39.350270   80228 cri.go:89] found id: ""
	I0814 17:38:39.350294   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.350303   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:39.350311   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:39.350387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:39.382168   80228 cri.go:89] found id: ""
	I0814 17:38:39.382198   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.382209   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:39.382215   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:39.382325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:39.415307   80228 cri.go:89] found id: ""
	I0814 17:38:39.415342   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.415354   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:39.415375   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:39.415388   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:39.469591   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:39.469632   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:39.482909   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:39.482942   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:39.609874   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:39.609906   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:39.609921   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:39.683210   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:39.683253   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:39.013876   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.513527   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.444286   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:43.444539   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.260548   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:44.263171   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.222687   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:42.235017   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:42.235088   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:42.285518   80228 cri.go:89] found id: ""
	I0814 17:38:42.285544   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.285553   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:42.285559   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:42.285614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:42.320462   80228 cri.go:89] found id: ""
	I0814 17:38:42.320492   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.320500   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:42.320506   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:42.320594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:42.353484   80228 cri.go:89] found id: ""
	I0814 17:38:42.353515   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.353528   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:42.353537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:42.353614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:42.388122   80228 cri.go:89] found id: ""
	I0814 17:38:42.388152   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.388163   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:42.388171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:42.388239   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:42.420246   80228 cri.go:89] found id: ""
	I0814 17:38:42.420275   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.420285   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:42.420293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:42.420359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:42.454636   80228 cri.go:89] found id: ""
	I0814 17:38:42.454669   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.454680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:42.454687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:42.454749   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:42.494638   80228 cri.go:89] found id: ""
	I0814 17:38:42.494670   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.494679   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:42.494686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:42.494751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:42.532224   80228 cri.go:89] found id: ""
	I0814 17:38:42.532257   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.532269   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:42.532281   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:42.532296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:42.546100   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:42.546132   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:42.616561   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:42.616589   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:42.616604   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:42.697269   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:42.697305   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:42.737787   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:42.737821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.293788   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:45.309020   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:45.309080   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:45.349218   80228 cri.go:89] found id: ""
	I0814 17:38:45.349246   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.349254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:45.349260   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:45.349318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:45.387622   80228 cri.go:89] found id: ""
	I0814 17:38:45.387653   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.387664   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:45.387672   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:45.387750   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:45.422120   80228 cri.go:89] found id: ""
	I0814 17:38:45.422154   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.422164   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:45.422169   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:45.422226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:45.457309   80228 cri.go:89] found id: ""
	I0814 17:38:45.457337   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.457352   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:45.457361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:45.457412   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:45.488969   80228 cri.go:89] found id: ""
	I0814 17:38:45.489000   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.489011   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:45.489019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:45.489081   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:45.522230   80228 cri.go:89] found id: ""
	I0814 17:38:45.522258   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.522273   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:45.522280   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:45.522345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:45.555394   80228 cri.go:89] found id: ""
	I0814 17:38:45.555425   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.555440   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:45.555448   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:45.555520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:45.587870   80228 cri.go:89] found id: ""
	I0814 17:38:45.587899   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.587910   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:45.587934   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:45.587951   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.638662   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:45.638709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:45.652217   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:45.652248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:45.733611   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:45.733635   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:45.733648   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:45.822733   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:45.822773   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:44.013405   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.014164   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:45.445289   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:47.944672   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.760279   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:49.260108   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:48.361519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:48.374848   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:48.374916   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:48.410849   80228 cri.go:89] found id: ""
	I0814 17:38:48.410897   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.410911   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:48.410920   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:48.410986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:48.448507   80228 cri.go:89] found id: ""
	I0814 17:38:48.448530   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.448537   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:48.448543   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:48.448594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:48.486257   80228 cri.go:89] found id: ""
	I0814 17:38:48.486298   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.486306   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:48.486312   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:48.486363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:48.520447   80228 cri.go:89] found id: ""
	I0814 17:38:48.520473   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.520482   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:48.520487   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:48.520544   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:48.552659   80228 cri.go:89] found id: ""
	I0814 17:38:48.552690   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.552698   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:48.552704   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:48.552768   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:48.585302   80228 cri.go:89] found id: ""
	I0814 17:38:48.585331   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.585341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:48.585348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:48.585415   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:48.617388   80228 cri.go:89] found id: ""
	I0814 17:38:48.617417   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.617428   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:48.617435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:48.617503   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:48.658987   80228 cri.go:89] found id: ""
	I0814 17:38:48.659012   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.659019   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:48.659027   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:48.659041   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:48.719882   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:48.719915   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:48.738962   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:48.738994   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:48.807703   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:48.807727   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:48.807739   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:48.886555   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:48.886585   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:48.514199   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.013627   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:50.444135   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:52.444957   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.446434   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.760518   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.260283   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.423653   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:51.436700   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:51.436792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:51.473198   80228 cri.go:89] found id: ""
	I0814 17:38:51.473227   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.473256   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:51.473262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:51.473311   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:51.508631   80228 cri.go:89] found id: ""
	I0814 17:38:51.508664   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.508675   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:51.508682   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:51.508743   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:51.540917   80228 cri.go:89] found id: ""
	I0814 17:38:51.540950   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.540958   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:51.540965   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:51.541014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:51.578112   80228 cri.go:89] found id: ""
	I0814 17:38:51.578140   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.578150   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:51.578158   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:51.578220   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:51.612664   80228 cri.go:89] found id: ""
	I0814 17:38:51.612692   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.612700   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:51.612706   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:51.612756   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:51.646374   80228 cri.go:89] found id: ""
	I0814 17:38:51.646399   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.646407   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:51.646413   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:51.646463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:51.682052   80228 cri.go:89] found id: ""
	I0814 17:38:51.682081   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.682092   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:51.682098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:51.682147   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:51.722625   80228 cri.go:89] found id: ""
	I0814 17:38:51.722653   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.722663   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:51.722674   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:51.722687   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:51.771788   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:51.771818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:51.785403   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:51.785432   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:51.854249   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:51.854269   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:51.854281   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:51.938121   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:51.938155   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.475672   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:54.491309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:54.491399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:54.524971   80228 cri.go:89] found id: ""
	I0814 17:38:54.525001   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.525011   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:54.525023   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:54.525087   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:54.560631   80228 cri.go:89] found id: ""
	I0814 17:38:54.560661   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.560670   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:54.560675   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:54.560728   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:54.595710   80228 cri.go:89] found id: ""
	I0814 17:38:54.595740   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.595751   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:54.595759   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:54.595824   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:54.631449   80228 cri.go:89] found id: ""
	I0814 17:38:54.631476   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.631487   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:54.631495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:54.631557   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:54.666492   80228 cri.go:89] found id: ""
	I0814 17:38:54.666526   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.666539   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:54.666548   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:54.666617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:54.701111   80228 cri.go:89] found id: ""
	I0814 17:38:54.701146   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.701158   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:54.701166   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:54.701226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:54.737535   80228 cri.go:89] found id: ""
	I0814 17:38:54.737574   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.737585   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:54.737595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:54.737653   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:54.771658   80228 cri.go:89] found id: ""
	I0814 17:38:54.771679   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.771686   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:54.771694   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:54.771709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:54.841798   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:54.841817   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:54.841829   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:54.930861   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:54.930917   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.970508   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:54.970540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:55.023077   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:55.023123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:53.513137   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.014005   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.945376   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:59.445437   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.260436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:58.759613   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:57.538876   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:57.551796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:57.551868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:57.584576   80228 cri.go:89] found id: ""
	I0814 17:38:57.584601   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.584609   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:57.584617   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:57.584687   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:57.617209   80228 cri.go:89] found id: ""
	I0814 17:38:57.617239   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.617249   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:57.617257   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:57.617338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:57.650062   80228 cri.go:89] found id: ""
	I0814 17:38:57.650089   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.650096   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:57.650102   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:57.650160   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:57.681118   80228 cri.go:89] found id: ""
	I0814 17:38:57.681146   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.681154   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:57.681160   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:57.681228   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:57.713803   80228 cri.go:89] found id: ""
	I0814 17:38:57.713834   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.713842   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:57.713851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:57.713920   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:57.749555   80228 cri.go:89] found id: ""
	I0814 17:38:57.749594   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.749604   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:57.749613   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:57.749677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:57.782714   80228 cri.go:89] found id: ""
	I0814 17:38:57.782744   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.782755   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:57.782763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:57.782826   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:57.815386   80228 cri.go:89] found id: ""
	I0814 17:38:57.815414   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.815423   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:57.815436   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:57.815450   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:57.868153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:57.868183   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:57.881022   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:57.881053   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:57.950474   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:57.950501   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:57.950515   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:58.032611   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:58.032644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:00.569493   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:00.583257   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:00.583384   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:00.614680   80228 cri.go:89] found id: ""
	I0814 17:39:00.614712   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.614723   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:00.614732   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:00.614792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:00.648161   80228 cri.go:89] found id: ""
	I0814 17:39:00.648189   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.648196   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:00.648203   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:00.648256   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:00.681844   80228 cri.go:89] found id: ""
	I0814 17:39:00.681872   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.681883   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:00.681890   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:00.681952   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:00.714773   80228 cri.go:89] found id: ""
	I0814 17:39:00.714804   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.714815   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:00.714823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:00.714891   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:00.747748   80228 cri.go:89] found id: ""
	I0814 17:39:00.747774   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.747781   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:00.747787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:00.747845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:00.783135   80228 cri.go:89] found id: ""
	I0814 17:39:00.783168   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.783179   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:00.783186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:00.783242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:00.817505   80228 cri.go:89] found id: ""
	I0814 17:39:00.817541   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.817552   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:00.817567   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:00.817633   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:00.849205   80228 cri.go:89] found id: ""
	I0814 17:39:00.849231   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.849241   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:00.849252   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:00.849273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:00.902529   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:00.902567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:00.916313   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:00.916346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:00.988708   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:00.988725   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:00.988737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:01.063818   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:01.063853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:58.512313   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.513694   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:01.944987   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.945640   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.759979   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.259928   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.603241   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:03.616400   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:03.616504   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:03.649580   80228 cri.go:89] found id: ""
	I0814 17:39:03.649619   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.649637   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:03.649650   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:03.649718   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:03.686252   80228 cri.go:89] found id: ""
	I0814 17:39:03.686274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.686282   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:03.686289   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:03.686349   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:03.720995   80228 cri.go:89] found id: ""
	I0814 17:39:03.721024   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.721036   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:03.721043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:03.721094   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:03.753466   80228 cri.go:89] found id: ""
	I0814 17:39:03.753491   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.753500   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:03.753506   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:03.753554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:03.794427   80228 cri.go:89] found id: ""
	I0814 17:39:03.794450   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.794458   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:03.794464   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:03.794524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:03.826245   80228 cri.go:89] found id: ""
	I0814 17:39:03.826274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.826282   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:03.826288   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:03.826355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:03.857208   80228 cri.go:89] found id: ""
	I0814 17:39:03.857238   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.857247   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:03.857253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:03.857325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:03.892840   80228 cri.go:89] found id: ""
	I0814 17:39:03.892864   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.892871   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:03.892879   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:03.892891   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:03.948554   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:03.948579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:03.962222   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:03.962249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:04.031833   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:04.031859   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:04.031875   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:04.109572   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:04.109636   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:03.013542   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.513201   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.444222   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:08.444833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.759653   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:07.760063   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.259961   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.646923   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:06.659699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:06.659757   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:06.691918   80228 cri.go:89] found id: ""
	I0814 17:39:06.691941   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.691951   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:06.691958   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:06.692016   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:06.722675   80228 cri.go:89] found id: ""
	I0814 17:39:06.722703   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.722713   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:06.722720   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:06.722782   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:06.757215   80228 cri.go:89] found id: ""
	I0814 17:39:06.757248   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.757259   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:06.757266   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:06.757333   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:06.791337   80228 cri.go:89] found id: ""
	I0814 17:39:06.791370   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.791378   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:06.791384   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:06.791440   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:06.825182   80228 cri.go:89] found id: ""
	I0814 17:39:06.825209   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.825220   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:06.825234   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:06.825288   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:06.857473   80228 cri.go:89] found id: ""
	I0814 17:39:06.857498   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.857507   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:06.857514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:06.857582   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:06.891293   80228 cri.go:89] found id: ""
	I0814 17:39:06.891343   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.891355   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:06.891363   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:06.891421   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:06.927476   80228 cri.go:89] found id: ""
	I0814 17:39:06.927505   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.927516   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:06.927527   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:06.927541   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:06.980604   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:06.980635   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:06.994648   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:06.994678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:07.072554   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:07.072580   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:07.072599   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:07.153141   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:07.153182   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:09.693348   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:09.705754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:09.705814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:09.739674   80228 cri.go:89] found id: ""
	I0814 17:39:09.739706   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.739717   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:09.739724   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:09.739788   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:09.774381   80228 cri.go:89] found id: ""
	I0814 17:39:09.774405   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.774413   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:09.774420   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:09.774478   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:09.806586   80228 cri.go:89] found id: ""
	I0814 17:39:09.806614   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.806623   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:09.806629   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:09.806696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:09.839564   80228 cri.go:89] found id: ""
	I0814 17:39:09.839594   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.839602   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:09.839614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:09.839672   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:09.872338   80228 cri.go:89] found id: ""
	I0814 17:39:09.872373   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.872385   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:09.872393   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:09.872457   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:09.904184   80228 cri.go:89] found id: ""
	I0814 17:39:09.904223   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.904231   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:09.904253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:09.904312   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:09.937217   80228 cri.go:89] found id: ""
	I0814 17:39:09.937242   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.937251   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:09.937259   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:09.937322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:09.972273   80228 cri.go:89] found id: ""
	I0814 17:39:09.972301   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.972313   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:09.972325   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:09.972341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:10.023736   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:10.023764   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:10.036654   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:10.036681   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:10.104031   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:10.104052   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:10.104068   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:10.187816   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:10.187853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:08.013632   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.513090   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.944491   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.945542   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.260129   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:14.759867   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.727237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:12.741970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:12.742041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:12.778721   80228 cri.go:89] found id: ""
	I0814 17:39:12.778748   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.778758   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:12.778765   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:12.778820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:12.812575   80228 cri.go:89] found id: ""
	I0814 17:39:12.812603   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.812610   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:12.812619   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:12.812678   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:12.845697   80228 cri.go:89] found id: ""
	I0814 17:39:12.845726   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.845737   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:12.845744   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:12.845809   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:12.879491   80228 cri.go:89] found id: ""
	I0814 17:39:12.879518   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.879529   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:12.879536   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:12.879604   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:12.912321   80228 cri.go:89] found id: ""
	I0814 17:39:12.912348   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.912356   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:12.912361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:12.912410   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:12.948866   80228 cri.go:89] found id: ""
	I0814 17:39:12.948889   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.948897   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:12.948903   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:12.948963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:12.983394   80228 cri.go:89] found id: ""
	I0814 17:39:12.983444   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.983459   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:12.983466   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:12.983530   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:13.018406   80228 cri.go:89] found id: ""
	I0814 17:39:13.018427   80228 logs.go:276] 0 containers: []
	W0814 17:39:13.018434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:13.018442   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:13.018457   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.069615   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:13.069655   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:13.082618   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:13.082651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:13.145033   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:13.145054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:13.145067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:13.225074   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:13.225108   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:15.765512   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:15.778320   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:15.778380   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:15.812847   80228 cri.go:89] found id: ""
	I0814 17:39:15.812876   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.812885   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:15.812896   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:15.812944   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:15.845131   80228 cri.go:89] found id: ""
	I0814 17:39:15.845159   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.845169   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:15.845176   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:15.845242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:15.879763   80228 cri.go:89] found id: ""
	I0814 17:39:15.879789   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.879799   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:15.879807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:15.879864   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:15.912746   80228 cri.go:89] found id: ""
	I0814 17:39:15.912776   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.912784   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:15.912797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:15.912858   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:15.946433   80228 cri.go:89] found id: ""
	I0814 17:39:15.946456   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.946465   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:15.946473   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:15.946534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:15.980060   80228 cri.go:89] found id: ""
	I0814 17:39:15.980086   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.980096   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:15.980103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:15.980167   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:16.011539   80228 cri.go:89] found id: ""
	I0814 17:39:16.011570   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.011581   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:16.011590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:16.011660   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:16.046019   80228 cri.go:89] found id: ""
	I0814 17:39:16.046046   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.046057   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:16.046068   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:16.046083   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:16.058442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:16.058470   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:16.132775   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:16.132799   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:16.132811   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:16.218360   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:16.218398   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:16.258070   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:16.258096   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.013275   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.013967   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.444280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:17.444827   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.943845   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:16.760773   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.259891   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:18.813127   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:18.826187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:18.826267   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:18.858405   80228 cri.go:89] found id: ""
	I0814 17:39:18.858433   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.858444   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:18.858452   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:18.858524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:18.893302   80228 cri.go:89] found id: ""
	I0814 17:39:18.893335   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.893342   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:18.893350   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:18.893417   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:18.929885   80228 cri.go:89] found id: ""
	I0814 17:39:18.929919   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.929929   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:18.929937   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:18.930000   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:18.966758   80228 cri.go:89] found id: ""
	I0814 17:39:18.966783   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.966792   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:18.966799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:18.966861   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:18.999815   80228 cri.go:89] found id: ""
	I0814 17:39:18.999838   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.999845   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:18.999851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:18.999915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:19.033737   80228 cri.go:89] found id: ""
	I0814 17:39:19.033761   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.033768   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:19.033774   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:19.033830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:19.070080   80228 cri.go:89] found id: ""
	I0814 17:39:19.070105   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.070113   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:19.070119   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:19.070190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:19.102868   80228 cri.go:89] found id: ""
	I0814 17:39:19.102897   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.102907   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:19.102918   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:19.102932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:19.156525   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:19.156569   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:19.170193   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:19.170225   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:19.236521   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:19.236547   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:19.236561   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:19.315984   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:19.316024   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:17.512553   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.513046   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.513082   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:22.444948   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:24.945111   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.260362   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:23.260567   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.855554   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:21.868457   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:21.868527   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:21.902098   80228 cri.go:89] found id: ""
	I0814 17:39:21.902124   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.902132   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:21.902139   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:21.902200   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:21.934876   80228 cri.go:89] found id: ""
	I0814 17:39:21.934908   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.934919   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:21.934926   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:21.934987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:21.976507   80228 cri.go:89] found id: ""
	I0814 17:39:21.976536   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.976548   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:21.976555   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:21.976617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:22.013876   80228 cri.go:89] found id: ""
	I0814 17:39:22.013897   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.013904   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:22.013909   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:22.013955   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:22.051943   80228 cri.go:89] found id: ""
	I0814 17:39:22.051969   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.051979   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:22.051999   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:22.052064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:22.084702   80228 cri.go:89] found id: ""
	I0814 17:39:22.084725   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.084733   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:22.084738   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:22.084784   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:22.117397   80228 cri.go:89] found id: ""
	I0814 17:39:22.117424   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.117432   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:22.117439   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:22.117490   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:22.154139   80228 cri.go:89] found id: ""
	I0814 17:39:22.154168   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.154178   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:22.154189   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:22.154201   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:22.205550   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:22.205580   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:22.219644   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:22.219679   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:22.288934   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:22.288957   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:22.288969   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:22.372917   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:22.372954   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:24.912578   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:24.925365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:24.925430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:24.961207   80228 cri.go:89] found id: ""
	I0814 17:39:24.961234   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.961248   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:24.961257   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:24.961339   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:24.998878   80228 cri.go:89] found id: ""
	I0814 17:39:24.998904   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.998911   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:24.998918   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:24.998971   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:25.034141   80228 cri.go:89] found id: ""
	I0814 17:39:25.034174   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.034187   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:25.034196   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:25.034274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:25.075634   80228 cri.go:89] found id: ""
	I0814 17:39:25.075667   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.075679   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:25.075688   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:25.075759   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:25.112890   80228 cri.go:89] found id: ""
	I0814 17:39:25.112929   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.112939   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:25.112946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:25.113007   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:25.152887   80228 cri.go:89] found id: ""
	I0814 17:39:25.152913   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.152921   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:25.152927   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:25.152987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:25.186421   80228 cri.go:89] found id: ""
	I0814 17:39:25.186452   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.186463   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:25.186471   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:25.186537   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:25.220390   80228 cri.go:89] found id: ""
	I0814 17:39:25.220417   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.220425   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:25.220432   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:25.220446   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:25.296112   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:25.296146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:25.335421   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:25.335449   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:25.387690   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:25.387718   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:25.401926   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:25.401957   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:25.471111   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:24.012534   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:26.513529   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.445280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:29.445416   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:25.759098   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.759924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:30.259610   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.972237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:27.985512   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:27.985575   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:28.019454   80228 cri.go:89] found id: ""
	I0814 17:39:28.019482   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.019493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:28.019502   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:28.019566   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:28.056908   80228 cri.go:89] found id: ""
	I0814 17:39:28.056931   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.056939   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:28.056944   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:28.056998   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:28.090678   80228 cri.go:89] found id: ""
	I0814 17:39:28.090707   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.090715   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:28.090721   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:28.090785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:28.125557   80228 cri.go:89] found id: ""
	I0814 17:39:28.125591   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.125609   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:28.125620   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:28.125682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:28.158092   80228 cri.go:89] found id: ""
	I0814 17:39:28.158121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.158129   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:28.158135   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:28.158191   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:28.193403   80228 cri.go:89] found id: ""
	I0814 17:39:28.193434   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.193445   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:28.193454   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:28.193524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:28.231095   80228 cri.go:89] found id: ""
	I0814 17:39:28.231121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.231131   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:28.231139   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:28.231203   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:28.280157   80228 cri.go:89] found id: ""
	I0814 17:39:28.280185   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.280196   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:28.280207   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:28.280220   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:28.352877   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:28.352894   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:28.352906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:28.439692   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:28.439736   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:28.479986   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:28.480012   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:28.538454   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:28.538493   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.052941   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:31.065810   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:31.065879   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:31.097988   80228 cri.go:89] found id: ""
	I0814 17:39:31.098013   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.098020   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:31.098045   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:31.098102   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:31.130126   80228 cri.go:89] found id: ""
	I0814 17:39:31.130152   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.130160   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:31.130166   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:31.130225   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:31.165945   80228 cri.go:89] found id: ""
	I0814 17:39:31.165984   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.165995   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:31.166003   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:31.166064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:31.199749   80228 cri.go:89] found id: ""
	I0814 17:39:31.199772   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.199778   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:31.199784   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:31.199843   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:31.231398   80228 cri.go:89] found id: ""
	I0814 17:39:31.231425   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.231436   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:31.231444   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:31.231528   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:31.263842   80228 cri.go:89] found id: ""
	I0814 17:39:31.263868   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.263878   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:31.263885   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:31.263950   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:31.299258   80228 cri.go:89] found id: ""
	I0814 17:39:31.299289   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.299301   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:31.299309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:31.299399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:29.013468   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.013638   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.445769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:33.944939   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:32.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:34.262303   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.332626   80228 cri.go:89] found id: ""
	I0814 17:39:31.332649   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.332657   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:31.332666   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:31.332678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:31.369262   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:31.369288   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:31.426003   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:31.426034   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.439583   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:31.439611   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:31.507538   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:31.507563   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:31.507583   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.085342   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:34.097491   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:34.097567   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:34.129220   80228 cri.go:89] found id: ""
	I0814 17:39:34.129244   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.129254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:34.129262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:34.129322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:34.161233   80228 cri.go:89] found id: ""
	I0814 17:39:34.161256   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.161264   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:34.161270   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:34.161334   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:34.193649   80228 cri.go:89] found id: ""
	I0814 17:39:34.193675   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.193683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:34.193689   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:34.193754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:34.226722   80228 cri.go:89] found id: ""
	I0814 17:39:34.226753   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.226763   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:34.226772   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:34.226842   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:34.259735   80228 cri.go:89] found id: ""
	I0814 17:39:34.259761   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.259774   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:34.259787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:34.259850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:34.296804   80228 cri.go:89] found id: ""
	I0814 17:39:34.296830   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.296838   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:34.296844   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:34.296894   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:34.328942   80228 cri.go:89] found id: ""
	I0814 17:39:34.328973   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.328982   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:34.328988   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:34.329041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:34.360820   80228 cri.go:89] found id: ""
	I0814 17:39:34.360847   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.360858   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:34.360868   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:34.360882   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:34.411106   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:34.411142   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:34.424737   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:34.424769   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:34.489094   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:34.489122   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:34.489138   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.569783   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:34.569818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:33.015308   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.513073   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.945264   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:38.444913   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:36.760740   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:39.260499   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:37.107492   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:37.120829   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:37.120901   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:37.154556   80228 cri.go:89] found id: ""
	I0814 17:39:37.154589   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.154601   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:37.154609   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:37.154673   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:37.192570   80228 cri.go:89] found id: ""
	I0814 17:39:37.192602   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.192609   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:37.192615   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:37.192679   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:37.225845   80228 cri.go:89] found id: ""
	I0814 17:39:37.225891   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.225902   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:37.225917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:37.225986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:37.262370   80228 cri.go:89] found id: ""
	I0814 17:39:37.262399   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.262408   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:37.262416   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:37.262481   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:37.297642   80228 cri.go:89] found id: ""
	I0814 17:39:37.297669   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.297680   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:37.297687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:37.297754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:37.331006   80228 cri.go:89] found id: ""
	I0814 17:39:37.331032   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.331041   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:37.331046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:37.331111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:37.364753   80228 cri.go:89] found id: ""
	I0814 17:39:37.364777   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.364786   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:37.364792   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:37.364850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:37.397722   80228 cri.go:89] found id: ""
	I0814 17:39:37.397748   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.397760   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:37.397770   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:37.397785   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:37.473616   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:37.473643   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:37.473659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:37.557672   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:37.557710   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.596337   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:37.596368   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:37.646815   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:37.646853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.160391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:40.174099   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:40.174181   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:40.208783   80228 cri.go:89] found id: ""
	I0814 17:39:40.208814   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.208821   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:40.208829   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:40.208880   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:40.243555   80228 cri.go:89] found id: ""
	I0814 17:39:40.243580   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.243588   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:40.243594   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:40.243661   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:40.276685   80228 cri.go:89] found id: ""
	I0814 17:39:40.276711   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.276723   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:40.276731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:40.276795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:40.309893   80228 cri.go:89] found id: ""
	I0814 17:39:40.309925   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.309937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:40.309944   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:40.310073   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:40.341724   80228 cri.go:89] found id: ""
	I0814 17:39:40.341751   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.341762   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:40.341770   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:40.341834   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:40.376442   80228 cri.go:89] found id: ""
	I0814 17:39:40.376478   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.376487   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:40.376495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:40.376558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:40.419240   80228 cri.go:89] found id: ""
	I0814 17:39:40.419269   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.419277   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:40.419284   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:40.419374   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:40.464678   80228 cri.go:89] found id: ""
	I0814 17:39:40.464703   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.464712   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:40.464721   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:40.464737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:40.531138   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:40.531175   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.546809   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:40.546842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:40.618791   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:40.618809   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:40.618821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:40.706169   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:40.706219   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.513604   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.013349   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.445989   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:42.944417   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:41.261429   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.760436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.250987   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:43.266109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:43.266179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:43.301860   80228 cri.go:89] found id: ""
	I0814 17:39:43.301891   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.301899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:43.301908   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:43.301991   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:43.337166   80228 cri.go:89] found id: ""
	I0814 17:39:43.337195   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.337205   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:43.337212   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:43.337262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:43.370640   80228 cri.go:89] found id: ""
	I0814 17:39:43.370671   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.370683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:43.370696   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:43.370752   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:43.405598   80228 cri.go:89] found id: ""
	I0814 17:39:43.405624   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.405632   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:43.405638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:43.405705   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:43.437161   80228 cri.go:89] found id: ""
	I0814 17:39:43.437184   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.437192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:43.437198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:43.437295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:43.470675   80228 cri.go:89] found id: ""
	I0814 17:39:43.470707   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.470718   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:43.470726   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:43.470787   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:43.503036   80228 cri.go:89] found id: ""
	I0814 17:39:43.503062   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.503073   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:43.503081   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:43.503149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:43.538269   80228 cri.go:89] found id: ""
	I0814 17:39:43.538296   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.538304   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:43.538328   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:43.538340   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:43.621889   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:43.621936   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:43.667460   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:43.667491   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:43.723630   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:43.723663   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:43.738905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:43.738939   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:43.805484   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:46.306031   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:42.512438   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:44.513112   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.513203   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:45.445470   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:47.944790   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.260236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:48.260662   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.324624   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:46.324696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:46.360039   80228 cri.go:89] found id: ""
	I0814 17:39:46.360066   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.360074   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:46.360082   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:46.360131   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:46.413735   80228 cri.go:89] found id: ""
	I0814 17:39:46.413767   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.413779   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:46.413788   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:46.413876   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:46.458823   80228 cri.go:89] found id: ""
	I0814 17:39:46.458851   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.458861   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:46.458869   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:46.458928   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:46.495347   80228 cri.go:89] found id: ""
	I0814 17:39:46.495378   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.495387   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:46.495392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:46.495441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:46.531502   80228 cri.go:89] found id: ""
	I0814 17:39:46.531533   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.531545   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:46.531554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:46.531624   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:46.564450   80228 cri.go:89] found id: ""
	I0814 17:39:46.564473   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.564482   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:46.564488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:46.564535   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:46.598293   80228 cri.go:89] found id: ""
	I0814 17:39:46.598401   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.598421   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:46.598431   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:46.598498   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:46.632370   80228 cri.go:89] found id: ""
	I0814 17:39:46.632400   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.632411   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:46.632423   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:46.632438   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:46.711814   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:46.711848   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:46.749410   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:46.749443   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:46.801686   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:46.801720   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:46.815196   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:46.815218   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:46.885648   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.386223   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:49.399359   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:49.399430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:49.432133   80228 cri.go:89] found id: ""
	I0814 17:39:49.432168   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.432179   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:49.432186   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:49.432250   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:49.469760   80228 cri.go:89] found id: ""
	I0814 17:39:49.469790   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.469799   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:49.469811   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:49.469873   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:49.500437   80228 cri.go:89] found id: ""
	I0814 17:39:49.500466   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.500474   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:49.500481   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:49.500531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:49.533685   80228 cri.go:89] found id: ""
	I0814 17:39:49.533709   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.533717   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:49.533723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:49.533790   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:49.570551   80228 cri.go:89] found id: ""
	I0814 17:39:49.570577   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.570584   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:49.570590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:49.570654   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:49.606649   80228 cri.go:89] found id: ""
	I0814 17:39:49.606672   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.606680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:49.606686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:49.606734   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:49.638060   80228 cri.go:89] found id: ""
	I0814 17:39:49.638090   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.638101   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:49.638109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:49.638178   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:49.674503   80228 cri.go:89] found id: ""
	I0814 17:39:49.674526   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.674534   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:49.674543   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:49.674563   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:49.710185   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:49.710213   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:49.764112   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:49.764146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:49.777862   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:49.777888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:49.849786   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.849806   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:49.849819   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:48.513418   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:51.013242   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.444526   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.444788   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.944646   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.759890   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.760236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.760324   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.429811   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:52.444364   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:52.444441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:52.483047   80228 cri.go:89] found id: ""
	I0814 17:39:52.483074   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.483085   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:52.483093   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:52.483157   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:52.520236   80228 cri.go:89] found id: ""
	I0814 17:39:52.520264   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.520274   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:52.520287   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:52.520353   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:52.553757   80228 cri.go:89] found id: ""
	I0814 17:39:52.553784   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.553795   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:52.553802   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:52.553869   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:52.588782   80228 cri.go:89] found id: ""
	I0814 17:39:52.588808   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.588818   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:52.588827   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:52.588893   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:52.620144   80228 cri.go:89] found id: ""
	I0814 17:39:52.620180   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.620192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:52.620201   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:52.620274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:52.652712   80228 cri.go:89] found id: ""
	I0814 17:39:52.652743   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.652755   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:52.652763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:52.652825   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:52.687789   80228 cri.go:89] found id: ""
	I0814 17:39:52.687819   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.687831   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:52.687838   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:52.687892   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:52.718996   80228 cri.go:89] found id: ""
	I0814 17:39:52.719021   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.719031   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:52.719041   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:52.719055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:52.775775   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:52.775808   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:52.789024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:52.789055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:52.863320   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:52.863351   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:52.863366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:52.941533   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:52.941571   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:55.477833   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:55.490723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:55.490783   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:55.525816   80228 cri.go:89] found id: ""
	I0814 17:39:55.525844   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.525852   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:55.525859   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:55.525908   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:55.561855   80228 cri.go:89] found id: ""
	I0814 17:39:55.561878   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.561887   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:55.561892   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:55.561949   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:55.599997   80228 cri.go:89] found id: ""
	I0814 17:39:55.600027   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.600038   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:55.600046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:55.600112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:55.632869   80228 cri.go:89] found id: ""
	I0814 17:39:55.632902   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.632914   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:55.632922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:55.632990   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:55.666029   80228 cri.go:89] found id: ""
	I0814 17:39:55.666055   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.666066   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:55.666079   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:55.666136   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:55.697222   80228 cri.go:89] found id: ""
	I0814 17:39:55.697247   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.697254   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:55.697260   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:55.697308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:55.729517   80228 cri.go:89] found id: ""
	I0814 17:39:55.729549   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.729561   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:55.729576   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:55.729640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:55.763890   80228 cri.go:89] found id: ""
	I0814 17:39:55.763922   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.763934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:55.763944   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:55.763960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:55.819588   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:55.819624   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:55.833281   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:55.833314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:55.904610   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:55.904632   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:55.904644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:55.981035   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:55.981069   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:53.513407   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:55.513734   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:56.945649   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.444937   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:57.259832   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.760669   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:58.522870   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:58.536151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:58.536224   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:58.568827   80228 cri.go:89] found id: ""
	I0814 17:39:58.568857   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.568869   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:58.568877   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:58.568946   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:58.600523   80228 cri.go:89] found id: ""
	I0814 17:39:58.600554   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.600564   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:58.600571   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:58.600640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:58.634201   80228 cri.go:89] found id: ""
	I0814 17:39:58.634232   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.634240   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:58.634245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:58.634308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:58.668746   80228 cri.go:89] found id: ""
	I0814 17:39:58.668772   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.668781   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:58.668787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:58.668847   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:58.699695   80228 cri.go:89] found id: ""
	I0814 17:39:58.699727   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.699739   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:58.699752   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:58.699836   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:58.731047   80228 cri.go:89] found id: ""
	I0814 17:39:58.731081   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.731095   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:58.731103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:58.731168   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:58.773454   80228 cri.go:89] found id: ""
	I0814 17:39:58.773486   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.773495   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:58.773501   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:58.773561   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:58.810135   80228 cri.go:89] found id: ""
	I0814 17:39:58.810159   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.810166   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:58.810175   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:58.810191   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:58.844897   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:58.844925   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:58.901700   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:58.901745   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:58.914272   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:58.914296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:58.984593   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:58.984610   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:58.984622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:57.513854   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:00.013241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.945861   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.444575   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:02.262241   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.760164   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.563227   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:01.576764   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:01.576840   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:01.610842   80228 cri.go:89] found id: ""
	I0814 17:40:01.610871   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.610878   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:01.610884   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:01.610935   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:01.643774   80228 cri.go:89] found id: ""
	I0814 17:40:01.643806   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.643816   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:01.643824   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:01.643888   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:01.677867   80228 cri.go:89] found id: ""
	I0814 17:40:01.677892   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.677899   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:01.677906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:01.677967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:01.712394   80228 cri.go:89] found id: ""
	I0814 17:40:01.712420   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.712427   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:01.712433   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:01.712492   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:01.745637   80228 cri.go:89] found id: ""
	I0814 17:40:01.745666   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.745676   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:01.745683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:01.745745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:01.782364   80228 cri.go:89] found id: ""
	I0814 17:40:01.782394   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.782404   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:01.782411   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:01.782484   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:01.814569   80228 cri.go:89] found id: ""
	I0814 17:40:01.814596   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.814605   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:01.814614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:01.814674   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:01.850421   80228 cri.go:89] found id: ""
	I0814 17:40:01.850450   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.850459   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:01.850468   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:01.850482   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:01.862965   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:01.863001   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:01.931312   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:01.931357   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:01.931375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:02.008236   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:02.008278   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:02.043238   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:02.043267   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:04.596909   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:04.610091   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:04.610158   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:04.645169   80228 cri.go:89] found id: ""
	I0814 17:40:04.645195   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.645205   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:04.645213   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:04.645279   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:04.677708   80228 cri.go:89] found id: ""
	I0814 17:40:04.677740   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.677750   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:04.677761   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:04.677823   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:04.710319   80228 cri.go:89] found id: ""
	I0814 17:40:04.710351   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.710362   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:04.710374   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:04.710443   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:04.745166   80228 cri.go:89] found id: ""
	I0814 17:40:04.745202   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.745219   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:04.745226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:04.745287   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:04.777307   80228 cri.go:89] found id: ""
	I0814 17:40:04.777354   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.777376   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:04.777383   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:04.777447   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:04.813854   80228 cri.go:89] found id: ""
	I0814 17:40:04.813886   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.813901   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:04.813908   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:04.813972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:04.848014   80228 cri.go:89] found id: ""
	I0814 17:40:04.848041   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.848049   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:04.848055   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:04.848113   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:04.882689   80228 cri.go:89] found id: ""
	I0814 17:40:04.882719   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.882731   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:04.882742   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:04.882760   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:04.952074   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:04.952096   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:04.952112   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:05.030258   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:05.030300   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:05.066509   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:05.066542   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:05.120153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:05.120195   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:02.512935   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.513254   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.445637   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.945142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.760223   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.760857   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:07.634404   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:07.646900   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:07.646966   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:07.678654   80228 cri.go:89] found id: ""
	I0814 17:40:07.678680   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.678689   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:07.678696   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:07.678753   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:07.711355   80228 cri.go:89] found id: ""
	I0814 17:40:07.711381   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.711389   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:07.711395   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:07.711446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:07.744134   80228 cri.go:89] found id: ""
	I0814 17:40:07.744161   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.744169   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:07.744179   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:07.744242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:07.776981   80228 cri.go:89] found id: ""
	I0814 17:40:07.777008   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.777015   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:07.777022   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:07.777086   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:07.811626   80228 cri.go:89] found id: ""
	I0814 17:40:07.811651   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.811661   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:07.811667   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:07.811720   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:07.843218   80228 cri.go:89] found id: ""
	I0814 17:40:07.843251   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.843262   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:07.843270   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:07.843355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:07.875208   80228 cri.go:89] found id: ""
	I0814 17:40:07.875232   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.875239   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:07.875245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:07.875295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:07.907896   80228 cri.go:89] found id: ""
	I0814 17:40:07.907923   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.907934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:07.907945   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:07.907960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:07.959717   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:07.959753   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:07.973050   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:07.973081   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:08.035085   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:08.035107   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:08.035120   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:08.109722   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:08.109770   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:10.648203   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:10.661194   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:10.661280   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:10.698401   80228 cri.go:89] found id: ""
	I0814 17:40:10.698431   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.698442   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:10.698450   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:10.698515   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:10.730057   80228 cri.go:89] found id: ""
	I0814 17:40:10.730083   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.730094   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:10.730101   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:10.730163   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:10.768780   80228 cri.go:89] found id: ""
	I0814 17:40:10.768807   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.768817   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:10.768824   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:10.768885   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:10.800866   80228 cri.go:89] found id: ""
	I0814 17:40:10.800898   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.800907   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:10.800917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:10.800984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:10.833741   80228 cri.go:89] found id: ""
	I0814 17:40:10.833771   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.833782   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:10.833789   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:10.833850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:10.865670   80228 cri.go:89] found id: ""
	I0814 17:40:10.865699   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.865706   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:10.865717   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:10.865770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:10.904726   80228 cri.go:89] found id: ""
	I0814 17:40:10.904757   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.904765   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:10.904771   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:10.904821   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:10.940549   80228 cri.go:89] found id: ""
	I0814 17:40:10.940578   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.940588   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:10.940598   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:10.940620   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:10.992592   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:10.992622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:11.006388   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:11.006412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:11.075455   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:11.075473   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:11.075486   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:11.156622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:11.156658   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:07.012878   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:09.013908   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.512592   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.444764   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.944931   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.259959   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.760823   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.695055   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:13.709460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:13.709531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:13.741941   80228 cri.go:89] found id: ""
	I0814 17:40:13.741967   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.741975   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:13.741981   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:13.742042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:13.773916   80228 cri.go:89] found id: ""
	I0814 17:40:13.773940   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.773947   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:13.773952   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:13.773999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:13.807871   80228 cri.go:89] found id: ""
	I0814 17:40:13.807902   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.807912   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:13.807918   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:13.807981   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:13.840902   80228 cri.go:89] found id: ""
	I0814 17:40:13.840931   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.840943   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:13.840952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:13.841018   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:13.871969   80228 cri.go:89] found id: ""
	I0814 17:40:13.871998   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.872010   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:13.872019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:13.872090   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:13.905502   80228 cri.go:89] found id: ""
	I0814 17:40:13.905524   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.905531   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:13.905537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:13.905599   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:13.937356   80228 cri.go:89] found id: ""
	I0814 17:40:13.937386   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.937396   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:13.937404   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:13.937466   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:13.972383   80228 cri.go:89] found id: ""
	I0814 17:40:13.972410   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.972418   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:13.972427   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:13.972448   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:14.022691   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:14.022717   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:14.035543   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:14.035567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:14.104869   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:14.104889   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:14.104905   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:14.182185   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:14.182221   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:13.513519   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.012958   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:15.945499   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.445122   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.259488   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.259706   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.259972   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.720519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:16.734323   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:16.734406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:16.769454   80228 cri.go:89] found id: ""
	I0814 17:40:16.769483   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.769493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:16.769501   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:16.769565   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:16.801513   80228 cri.go:89] found id: ""
	I0814 17:40:16.801541   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.801548   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:16.801554   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:16.801610   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:16.835184   80228 cri.go:89] found id: ""
	I0814 17:40:16.835212   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.835220   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:16.835226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:16.835275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:16.867162   80228 cri.go:89] found id: ""
	I0814 17:40:16.867192   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.867201   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:16.867207   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:16.867257   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:16.902912   80228 cri.go:89] found id: ""
	I0814 17:40:16.902942   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.902953   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:16.902961   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:16.903026   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:16.935004   80228 cri.go:89] found id: ""
	I0814 17:40:16.935033   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.935044   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:16.935052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:16.935115   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:16.969082   80228 cri.go:89] found id: ""
	I0814 17:40:16.969110   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.969120   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:16.969127   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:16.969194   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:17.002594   80228 cri.go:89] found id: ""
	I0814 17:40:17.002622   80228 logs.go:276] 0 containers: []
	W0814 17:40:17.002633   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:17.002644   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:17.002659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:17.054319   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:17.054359   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:17.068024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:17.068048   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:17.139480   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:17.139499   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:17.139514   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:17.222086   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:17.222140   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:19.758630   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:19.772186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:19.772254   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:19.807719   80228 cri.go:89] found id: ""
	I0814 17:40:19.807751   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.807760   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:19.807766   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:19.807830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:19.851023   80228 cri.go:89] found id: ""
	I0814 17:40:19.851054   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.851067   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:19.851083   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:19.851154   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:19.882961   80228 cri.go:89] found id: ""
	I0814 17:40:19.882987   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.882997   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:19.883005   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:19.883063   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:19.920312   80228 cri.go:89] found id: ""
	I0814 17:40:19.920345   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.920356   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:19.920365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:19.920430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:19.953628   80228 cri.go:89] found id: ""
	I0814 17:40:19.953658   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.953671   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:19.953683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:19.953741   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:19.984998   80228 cri.go:89] found id: ""
	I0814 17:40:19.985028   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.985036   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:19.985043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:19.985092   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:20.018728   80228 cri.go:89] found id: ""
	I0814 17:40:20.018753   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.018761   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:20.018766   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:20.018814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:20.050718   80228 cri.go:89] found id: ""
	I0814 17:40:20.050743   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.050757   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:20.050765   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:20.050777   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:20.101567   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:20.101602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:20.114890   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:20.114920   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:20.183926   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:20.183948   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:20.183960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:20.270195   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:20.270223   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:18.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.513633   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.445352   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.945704   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.260365   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:24.760475   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.807078   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:22.820187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:22.820260   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:22.852474   80228 cri.go:89] found id: ""
	I0814 17:40:22.852504   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.852514   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:22.852522   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:22.852596   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:22.887141   80228 cri.go:89] found id: ""
	I0814 17:40:22.887167   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.887177   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:22.887184   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:22.887248   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:22.919384   80228 cri.go:89] found id: ""
	I0814 17:40:22.919417   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.919428   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:22.919436   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:22.919502   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:22.951877   80228 cri.go:89] found id: ""
	I0814 17:40:22.951897   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.951905   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:22.951910   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:22.951965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:22.987712   80228 cri.go:89] found id: ""
	I0814 17:40:22.987742   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.987752   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:22.987760   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:22.987832   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:23.025562   80228 cri.go:89] found id: ""
	I0814 17:40:23.025597   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.025608   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:23.025616   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:23.025680   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:23.058928   80228 cri.go:89] found id: ""
	I0814 17:40:23.058955   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.058962   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:23.058969   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:23.059025   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:23.096807   80228 cri.go:89] found id: ""
	I0814 17:40:23.096836   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.096847   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:23.096858   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:23.096874   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:23.148943   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:23.148977   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:23.161905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:23.161927   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:23.232119   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:23.232147   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:23.232160   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:23.320693   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:23.320731   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:25.858506   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:25.871891   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:25.871964   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:25.904732   80228 cri.go:89] found id: ""
	I0814 17:40:25.904760   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.904769   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:25.904775   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:25.904830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:25.936317   80228 cri.go:89] found id: ""
	I0814 17:40:25.936347   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.936358   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:25.936365   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:25.936427   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:25.969921   80228 cri.go:89] found id: ""
	I0814 17:40:25.969946   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.969954   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:25.969960   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:25.970009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:26.022832   80228 cri.go:89] found id: ""
	I0814 17:40:26.022862   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.022872   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:26.022880   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:26.022941   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:26.056178   80228 cri.go:89] found id: ""
	I0814 17:40:26.056206   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.056214   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:26.056224   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:26.056275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:26.086921   80228 cri.go:89] found id: ""
	I0814 17:40:26.086955   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.086966   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:26.086974   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:26.087031   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:26.120631   80228 cri.go:89] found id: ""
	I0814 17:40:26.120665   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.120677   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:26.120686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:26.120745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:26.154258   80228 cri.go:89] found id: ""
	I0814 17:40:26.154289   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.154300   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:26.154310   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:26.154324   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:26.208366   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:26.208405   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:26.222160   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:26.222192   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:26.294737   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:26.294756   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:26.294768   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:22.513813   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.013707   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.444691   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.944277   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.945043   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.260184   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.262080   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:26.372870   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:26.372906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:28.908165   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:28.920754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:28.920816   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:28.953950   80228 cri.go:89] found id: ""
	I0814 17:40:28.953971   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.953978   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:28.953987   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:28.954035   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:28.985228   80228 cri.go:89] found id: ""
	I0814 17:40:28.985266   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.985278   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:28.985286   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:28.985347   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:29.016295   80228 cri.go:89] found id: ""
	I0814 17:40:29.016328   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.016336   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:29.016341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:29.016392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:29.048664   80228 cri.go:89] found id: ""
	I0814 17:40:29.048696   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.048707   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:29.048715   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:29.048778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:29.080441   80228 cri.go:89] found id: ""
	I0814 17:40:29.080466   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.080474   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:29.080520   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:29.080584   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:29.112377   80228 cri.go:89] found id: ""
	I0814 17:40:29.112407   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.112418   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:29.112426   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:29.112493   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:29.145368   80228 cri.go:89] found id: ""
	I0814 17:40:29.145395   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.145403   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:29.145409   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:29.145471   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:29.177305   80228 cri.go:89] found id: ""
	I0814 17:40:29.177333   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.177341   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:29.177350   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:29.177366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:29.232156   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:29.232197   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:29.245286   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:29.245317   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:29.322257   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:29.322286   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:29.322302   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:29.397679   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:29.397714   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:27.512862   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.514756   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.945087   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.444743   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.760242   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.259825   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.935264   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:31.948380   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:31.948446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:31.978898   80228 cri.go:89] found id: ""
	I0814 17:40:31.978925   80228 logs.go:276] 0 containers: []
	W0814 17:40:31.978932   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:31.978939   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:31.978989   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:32.010652   80228 cri.go:89] found id: ""
	I0814 17:40:32.010681   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.010692   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:32.010699   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:32.010767   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:32.044821   80228 cri.go:89] found id: ""
	I0814 17:40:32.044852   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.044860   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:32.044866   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:32.044915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:32.076359   80228 cri.go:89] found id: ""
	I0814 17:40:32.076388   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.076398   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:32.076406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:32.076469   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:32.107652   80228 cri.go:89] found id: ""
	I0814 17:40:32.107680   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.107692   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:32.107709   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:32.107770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:32.138445   80228 cri.go:89] found id: ""
	I0814 17:40:32.138473   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.138484   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:32.138492   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:32.138558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:32.173771   80228 cri.go:89] found id: ""
	I0814 17:40:32.173794   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.173802   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:32.173807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:32.173857   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:32.206387   80228 cri.go:89] found id: ""
	I0814 17:40:32.206418   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.206429   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:32.206441   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:32.206454   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.258114   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:32.258148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:32.271984   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:32.272009   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:32.335423   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:32.335447   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:32.335464   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:32.411155   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:32.411206   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:34.975280   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:34.988098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:34.988176   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:35.022020   80228 cri.go:89] found id: ""
	I0814 17:40:35.022047   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.022062   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:35.022071   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:35.022124   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:35.055528   80228 cri.go:89] found id: ""
	I0814 17:40:35.055568   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.055578   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:35.055586   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:35.055647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:35.088373   80228 cri.go:89] found id: ""
	I0814 17:40:35.088404   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.088415   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:35.088422   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:35.088489   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:35.123162   80228 cri.go:89] found id: ""
	I0814 17:40:35.123188   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.123198   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:35.123206   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:35.123268   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:35.160240   80228 cri.go:89] found id: ""
	I0814 17:40:35.160267   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.160277   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:35.160286   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:35.160348   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:35.196249   80228 cri.go:89] found id: ""
	I0814 17:40:35.196276   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.196285   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:35.196293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:35.196359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:35.232564   80228 cri.go:89] found id: ""
	I0814 17:40:35.232588   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.232598   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:35.232606   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:35.232671   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:35.267357   80228 cri.go:89] found id: ""
	I0814 17:40:35.267383   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.267392   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:35.267399   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:35.267412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:35.279779   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:35.279806   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:35.347748   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:35.347769   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:35.347782   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:35.427900   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:35.427932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:35.468925   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:35.468953   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.013942   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.513138   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.944749   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.444665   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.760292   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:38.020581   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:38.034985   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:38.035066   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:38.070206   80228 cri.go:89] found id: ""
	I0814 17:40:38.070231   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.070240   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:38.070246   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:38.070294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:38.103859   80228 cri.go:89] found id: ""
	I0814 17:40:38.103885   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.103893   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:38.103898   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:38.103947   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:38.138247   80228 cri.go:89] found id: ""
	I0814 17:40:38.138271   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.138278   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:38.138285   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:38.138345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:38.179475   80228 cri.go:89] found id: ""
	I0814 17:40:38.179511   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.179520   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:38.179526   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:38.179578   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:38.224892   80228 cri.go:89] found id: ""
	I0814 17:40:38.224922   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.224932   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:38.224940   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:38.224996   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:38.270456   80228 cri.go:89] found id: ""
	I0814 17:40:38.270485   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.270497   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:38.270504   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:38.270569   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:38.305267   80228 cri.go:89] found id: ""
	I0814 17:40:38.305300   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.305308   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:38.305315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:38.305387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:38.336942   80228 cri.go:89] found id: ""
	I0814 17:40:38.336978   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.336989   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:38.337000   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:38.337016   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:38.388618   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:38.388651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:38.403442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:38.403472   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:38.478225   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:38.478256   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:38.478273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:38.553400   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:38.553440   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:41.089947   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:41.101989   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:41.102070   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:41.133743   80228 cri.go:89] found id: ""
	I0814 17:40:41.133767   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.133774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:41.133780   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:41.133828   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:41.169671   80228 cri.go:89] found id: ""
	I0814 17:40:41.169706   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.169714   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:41.169721   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:41.169773   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:41.203425   80228 cri.go:89] found id: ""
	I0814 17:40:41.203451   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.203459   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:41.203475   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:41.203534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:41.237031   80228 cri.go:89] found id: ""
	I0814 17:40:41.237064   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.237075   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:41.237084   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:41.237149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:41.271095   80228 cri.go:89] found id: ""
	I0814 17:40:41.271120   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.271128   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:41.271134   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:41.271190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:41.303640   80228 cri.go:89] found id: ""
	I0814 17:40:41.303672   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.303684   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:41.303692   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:41.303755   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:37.013555   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.013733   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.013910   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.943472   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.944582   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.261795   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.759672   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.336010   80228 cri.go:89] found id: ""
	I0814 17:40:41.336047   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.336062   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:41.336071   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:41.336140   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:41.370098   80228 cri.go:89] found id: ""
	I0814 17:40:41.370133   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.370143   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:41.370154   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:41.370168   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:41.420760   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:41.420794   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:41.433651   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:41.433678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:41.506623   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:41.506644   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:41.506657   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:41.591390   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:41.591426   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:44.130649   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:44.144362   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:44.144428   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:44.178485   80228 cri.go:89] found id: ""
	I0814 17:40:44.178516   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.178527   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:44.178535   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:44.178600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:44.214231   80228 cri.go:89] found id: ""
	I0814 17:40:44.214260   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.214268   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:44.214274   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:44.214336   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:44.248483   80228 cri.go:89] found id: ""
	I0814 17:40:44.248513   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.248524   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:44.248531   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:44.248600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:44.282445   80228 cri.go:89] found id: ""
	I0814 17:40:44.282472   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.282481   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:44.282493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:44.282560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:44.315141   80228 cri.go:89] found id: ""
	I0814 17:40:44.315169   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.315190   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:44.315198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:44.315259   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:44.346756   80228 cri.go:89] found id: ""
	I0814 17:40:44.346781   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.346789   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:44.346795   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:44.346853   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:44.378143   80228 cri.go:89] found id: ""
	I0814 17:40:44.378172   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.378183   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:44.378191   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:44.378255   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:44.411526   80228 cri.go:89] found id: ""
	I0814 17:40:44.411557   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.411567   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:44.411578   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:44.411592   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:44.459873   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:44.459913   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:44.473112   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:44.473148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:44.547514   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:44.547546   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:44.547579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:44.630377   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:44.630415   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:43.512113   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.512590   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.945080   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.946506   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.760626   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:48.260015   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.260186   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.173094   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:47.185854   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:47.185927   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:47.228755   80228 cri.go:89] found id: ""
	I0814 17:40:47.228781   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.228788   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:47.228795   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:47.228851   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:47.264986   80228 cri.go:89] found id: ""
	I0814 17:40:47.265020   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.265031   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:47.265037   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:47.265100   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:47.296900   80228 cri.go:89] found id: ""
	I0814 17:40:47.296929   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.296940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:47.296947   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:47.297009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:47.328120   80228 cri.go:89] found id: ""
	I0814 17:40:47.328147   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.328155   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:47.328161   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:47.328210   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:47.364147   80228 cri.go:89] found id: ""
	I0814 17:40:47.364171   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.364178   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:47.364184   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:47.364238   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:47.400466   80228 cri.go:89] found id: ""
	I0814 17:40:47.400493   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.400501   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:47.400507   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:47.400562   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:47.432681   80228 cri.go:89] found id: ""
	I0814 17:40:47.432713   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.432724   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:47.432732   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:47.432801   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:47.465466   80228 cri.go:89] found id: ""
	I0814 17:40:47.465498   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.465510   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:47.465522   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:47.465536   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:47.502076   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:47.502114   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:47.554451   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:47.554488   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:47.567658   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:47.567690   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:47.635805   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:47.635829   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:47.635844   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:50.215353   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:50.227723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:50.227795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:50.258250   80228 cri.go:89] found id: ""
	I0814 17:40:50.258276   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.258287   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:50.258296   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:50.258363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:50.291371   80228 cri.go:89] found id: ""
	I0814 17:40:50.291406   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.291416   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:50.291423   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:50.291479   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:50.321449   80228 cri.go:89] found id: ""
	I0814 17:40:50.321473   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.321481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:50.321486   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:50.321545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:50.351752   80228 cri.go:89] found id: ""
	I0814 17:40:50.351780   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.351791   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:50.351799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:50.351856   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:50.382022   80228 cri.go:89] found id: ""
	I0814 17:40:50.382050   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.382057   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:50.382063   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:50.382118   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:50.414057   80228 cri.go:89] found id: ""
	I0814 17:40:50.414083   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.414091   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:50.414098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:50.414156   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:50.447508   80228 cri.go:89] found id: ""
	I0814 17:40:50.447530   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.447537   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:50.447543   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:50.447606   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:50.487401   80228 cri.go:89] found id: ""
	I0814 17:40:50.487425   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.487434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:50.487442   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:50.487455   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:50.524404   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:50.524439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:50.578220   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:50.578256   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:50.591405   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:50.591431   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:50.657727   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:50.657750   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:50.657762   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:47.514490   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.012588   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.445363   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.944903   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.760728   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.760918   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:53.237985   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:53.250502   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:53.250572   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:53.285728   80228 cri.go:89] found id: ""
	I0814 17:40:53.285763   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.285774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:53.285784   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:53.285848   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:53.318195   80228 cri.go:89] found id: ""
	I0814 17:40:53.318231   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.318243   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:53.318252   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:53.318317   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:53.350259   80228 cri.go:89] found id: ""
	I0814 17:40:53.350291   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.350302   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:53.350310   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:53.350385   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:53.385894   80228 cri.go:89] found id: ""
	I0814 17:40:53.385920   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.385928   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:53.385934   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:53.385983   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:53.420851   80228 cri.go:89] found id: ""
	I0814 17:40:53.420878   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.420890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:53.420897   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:53.420963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:53.458332   80228 cri.go:89] found id: ""
	I0814 17:40:53.458370   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.458381   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:53.458392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:53.458465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:53.489719   80228 cri.go:89] found id: ""
	I0814 17:40:53.489750   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.489759   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:53.489765   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:53.489820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:53.522942   80228 cri.go:89] found id: ""
	I0814 17:40:53.522977   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.522988   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:53.522998   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:53.523013   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:53.599450   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:53.599492   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:53.637225   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:53.637254   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:53.688605   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:53.688647   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:53.704601   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:53.704633   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:53.775046   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.275201   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:56.288406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:56.288463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:52.013747   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.513735   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.514335   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:55.445462   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.447142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.946025   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.261047   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.760136   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.322862   80228 cri.go:89] found id: ""
	I0814 17:40:56.322891   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.322899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:56.322905   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:56.322954   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:56.356214   80228 cri.go:89] found id: ""
	I0814 17:40:56.356243   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.356262   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:56.356268   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:56.356338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:56.388877   80228 cri.go:89] found id: ""
	I0814 17:40:56.388900   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.388909   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:56.388915   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:56.388967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:56.422552   80228 cri.go:89] found id: ""
	I0814 17:40:56.422577   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.422585   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:56.422590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:56.422649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:56.456995   80228 cri.go:89] found id: ""
	I0814 17:40:56.457018   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.457026   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:56.457031   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:56.457079   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:56.495745   80228 cri.go:89] found id: ""
	I0814 17:40:56.495772   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.495788   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:56.495797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:56.495868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:56.529139   80228 cri.go:89] found id: ""
	I0814 17:40:56.529171   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.529179   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:56.529185   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:56.529237   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:56.561377   80228 cri.go:89] found id: ""
	I0814 17:40:56.561406   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.561414   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:56.561424   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:56.561439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:56.601504   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:56.601537   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:56.653369   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:56.653403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:56.666117   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:56.666144   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:56.731921   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.731949   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:56.731963   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.315712   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:59.328425   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:59.328486   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:59.364056   80228 cri.go:89] found id: ""
	I0814 17:40:59.364080   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.364088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:59.364094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:59.364151   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:59.398948   80228 cri.go:89] found id: ""
	I0814 17:40:59.398971   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.398978   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:59.398984   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:59.399029   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:59.430301   80228 cri.go:89] found id: ""
	I0814 17:40:59.430327   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.430335   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:59.430341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:59.430406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:59.465278   80228 cri.go:89] found id: ""
	I0814 17:40:59.465301   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.465309   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:59.465315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:59.465372   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:59.497544   80228 cri.go:89] found id: ""
	I0814 17:40:59.497575   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.497586   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:59.497595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:59.497659   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:59.529463   80228 cri.go:89] found id: ""
	I0814 17:40:59.529494   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.529506   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:59.529513   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:59.529587   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:59.562448   80228 cri.go:89] found id: ""
	I0814 17:40:59.562477   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.562487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:59.562495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:59.562609   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:59.594059   80228 cri.go:89] found id: ""
	I0814 17:40:59.594089   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.594103   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:59.594112   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:59.594123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.672139   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:59.672172   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:59.710714   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:59.710743   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:59.762645   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:59.762676   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:59.776006   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:59.776033   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:59.838187   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:59.013030   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:01.013280   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.445595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.944484   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.260244   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.760862   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.338964   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:02.351381   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:02.351460   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:02.383206   80228 cri.go:89] found id: ""
	I0814 17:41:02.383235   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.383244   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:02.383250   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:02.383310   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:02.417016   80228 cri.go:89] found id: ""
	I0814 17:41:02.417042   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.417049   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:02.417055   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:02.417111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:02.451936   80228 cri.go:89] found id: ""
	I0814 17:41:02.451964   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.451974   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:02.451982   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:02.452042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:02.489896   80228 cri.go:89] found id: ""
	I0814 17:41:02.489927   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.489937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:02.489945   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:02.490011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:02.524273   80228 cri.go:89] found id: ""
	I0814 17:41:02.524308   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.524339   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:02.524346   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:02.524409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:02.558813   80228 cri.go:89] found id: ""
	I0814 17:41:02.558842   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.558850   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:02.558861   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:02.558917   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:02.592704   80228 cri.go:89] found id: ""
	I0814 17:41:02.592733   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.592747   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:02.592753   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:02.592818   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:02.625250   80228 cri.go:89] found id: ""
	I0814 17:41:02.625277   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.625288   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:02.625299   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:02.625312   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:02.677577   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:02.677613   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:02.691407   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:02.691439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:02.756797   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:02.756869   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:02.756888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:02.830803   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:02.830842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.370085   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:05.385272   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:05.385342   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:05.421775   80228 cri.go:89] found id: ""
	I0814 17:41:05.421799   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.421806   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:05.421812   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:05.421860   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:05.457054   80228 cri.go:89] found id: ""
	I0814 17:41:05.457083   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.457093   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:05.457100   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:05.457153   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:05.489290   80228 cri.go:89] found id: ""
	I0814 17:41:05.489330   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.489338   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:05.489345   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:05.489392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:05.527066   80228 cri.go:89] found id: ""
	I0814 17:41:05.527091   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.527098   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:05.527105   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:05.527155   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:05.563882   80228 cri.go:89] found id: ""
	I0814 17:41:05.563915   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.563925   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:05.563931   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:05.563982   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:05.601837   80228 cri.go:89] found id: ""
	I0814 17:41:05.601863   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.601871   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:05.601879   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:05.601940   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:05.633503   80228 cri.go:89] found id: ""
	I0814 17:41:05.633531   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.633539   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:05.633545   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:05.633615   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:05.668281   80228 cri.go:89] found id: ""
	I0814 17:41:05.668312   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.668324   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:05.668335   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:05.668349   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:05.747214   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:05.747249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.784408   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:05.784441   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:05.835067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:05.835103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:05.847938   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:05.847966   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:05.917404   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:03.513033   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:05.514476   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:06.944595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.944850   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:07.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:09.762513   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.417559   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:08.431092   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:08.431165   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:08.465357   80228 cri.go:89] found id: ""
	I0814 17:41:08.465515   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.465543   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:08.465560   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:08.465675   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:08.499085   80228 cri.go:89] found id: ""
	I0814 17:41:08.499114   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.499123   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:08.499129   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:08.499180   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:08.533881   80228 cri.go:89] found id: ""
	I0814 17:41:08.533909   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.533917   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:08.533922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:08.533972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:08.570503   80228 cri.go:89] found id: ""
	I0814 17:41:08.570549   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.570560   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:08.570572   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:08.570649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:08.602557   80228 cri.go:89] found id: ""
	I0814 17:41:08.602599   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.602610   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:08.602691   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:08.602785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:08.636174   80228 cri.go:89] found id: ""
	I0814 17:41:08.636199   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.636206   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:08.636213   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:08.636261   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:08.672774   80228 cri.go:89] found id: ""
	I0814 17:41:08.672804   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.672815   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:08.672823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:08.672890   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:08.705535   80228 cri.go:89] found id: ""
	I0814 17:41:08.705590   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.705605   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:08.705622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:08.705641   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:08.744315   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:08.744341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:08.794632   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:08.794666   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:08.808089   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:08.808117   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:08.876417   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:08.876436   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:08.876452   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:08.013688   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:10.512639   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.444206   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:13.944056   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:12.260065   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:14.759640   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.458562   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:11.470905   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:11.470965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:11.505992   80228 cri.go:89] found id: ""
	I0814 17:41:11.506023   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.506036   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:11.506044   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:11.506112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:11.540893   80228 cri.go:89] found id: ""
	I0814 17:41:11.540922   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.540932   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:11.540945   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:11.541001   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:11.575423   80228 cri.go:89] found id: ""
	I0814 17:41:11.575448   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.575455   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:11.575462   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:11.575520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:11.608126   80228 cri.go:89] found id: ""
	I0814 17:41:11.608155   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.608164   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:11.608171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:11.608222   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:11.640165   80228 cri.go:89] found id: ""
	I0814 17:41:11.640190   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.640198   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:11.640204   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:11.640263   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:11.674425   80228 cri.go:89] found id: ""
	I0814 17:41:11.674446   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.674455   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:11.674460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:11.674513   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:11.707448   80228 cri.go:89] found id: ""
	I0814 17:41:11.707477   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.707487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:11.707493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:11.707555   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:11.744309   80228 cri.go:89] found id: ""
	I0814 17:41:11.744338   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.744346   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:11.744363   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:11.744375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:11.824165   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:11.824196   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:11.862013   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:11.862039   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:11.913862   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:11.913902   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:11.927147   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:11.927178   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:11.998403   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.498590   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:14.512847   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:14.512938   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:14.549255   80228 cri.go:89] found id: ""
	I0814 17:41:14.549288   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.549306   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:14.549316   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:14.549382   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:14.588917   80228 cri.go:89] found id: ""
	I0814 17:41:14.588948   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.588956   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:14.588963   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:14.589012   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:14.622581   80228 cri.go:89] found id: ""
	I0814 17:41:14.622611   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.622621   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:14.622628   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:14.622693   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:14.656029   80228 cri.go:89] found id: ""
	I0814 17:41:14.656056   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.656064   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:14.656070   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:14.656117   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:14.687502   80228 cri.go:89] found id: ""
	I0814 17:41:14.687527   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.687536   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:14.687541   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:14.687614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:14.720682   80228 cri.go:89] found id: ""
	I0814 17:41:14.720713   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.720721   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:14.720728   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:14.720778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:14.752482   80228 cri.go:89] found id: ""
	I0814 17:41:14.752511   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.752520   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:14.752525   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:14.752577   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:14.792980   80228 cri.go:89] found id: ""
	I0814 17:41:14.793004   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.793014   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:14.793026   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:14.793042   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:14.845259   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:14.845297   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:14.858530   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:14.858556   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:14.931025   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.931054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:14.931067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:15.008081   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:15.008115   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:13.014174   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:15.512768   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444802   79521 pod_ready.go:81] duration metric: took 4m0.006448573s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:16.444810   79521 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 17:41:16.444817   79521 pod_ready.go:38] duration metric: took 4m5.044051569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:16.444832   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:41:16.444858   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:16.444901   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:16.499710   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.499742   79521 cri.go:89] found id: ""
	I0814 17:41:16.499751   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:16.499815   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.504467   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:16.504544   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:16.546815   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.546842   79521 cri.go:89] found id: ""
	I0814 17:41:16.546851   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:16.546905   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.550917   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:16.550986   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:16.590195   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:16.590216   79521 cri.go:89] found id: ""
	I0814 17:41:16.590224   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:16.590267   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.594123   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:16.594196   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:16.631058   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:16.631091   79521 cri.go:89] found id: ""
	I0814 17:41:16.631101   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:16.631163   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.635151   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:16.635226   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:16.671555   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.671582   79521 cri.go:89] found id: ""
	I0814 17:41:16.671592   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:16.671657   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.675790   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:16.675847   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:16.713131   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:16.713157   79521 cri.go:89] found id: ""
	I0814 17:41:16.713165   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:16.713217   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.717296   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:16.717354   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:16.756212   79521 cri.go:89] found id: ""
	I0814 17:41:16.756245   79521 logs.go:276] 0 containers: []
	W0814 17:41:16.756255   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:16.756261   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:16.756324   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:16.802379   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.802411   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.802417   79521 cri.go:89] found id: ""
	I0814 17:41:16.802431   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:16.802492   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.807105   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.811210   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:16.811241   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.852490   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:16.852526   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.894384   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:16.894425   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.929919   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:16.929949   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.965031   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:16.965061   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:17.468878   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.468945   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.482799   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.482826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:17.610874   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:17.610904   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:17.649292   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:17.649322   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:17.691014   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:17.691045   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:17.749218   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.749254   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.794240   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.794280   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.868805   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:17.868851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:18.760369   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:17.544873   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:17.557699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:17.557791   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:17.600314   80228 cri.go:89] found id: ""
	I0814 17:41:17.600347   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.600360   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:17.600370   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:17.600441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:17.634873   80228 cri.go:89] found id: ""
	I0814 17:41:17.634902   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.634914   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:17.634923   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:17.634986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:17.670521   80228 cri.go:89] found id: ""
	I0814 17:41:17.670552   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.670563   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:17.670571   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:17.670647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:17.705587   80228 cri.go:89] found id: ""
	I0814 17:41:17.705612   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.705626   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:17.705632   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:17.705682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:17.768178   80228 cri.go:89] found id: ""
	I0814 17:41:17.768207   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.768218   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:17.768226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:17.768290   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:17.804692   80228 cri.go:89] found id: ""
	I0814 17:41:17.804721   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.804729   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:17.804735   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:17.804795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:17.847994   80228 cri.go:89] found id: ""
	I0814 17:41:17.848030   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.848041   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:17.848052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:17.848122   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:17.883905   80228 cri.go:89] found id: ""
	I0814 17:41:17.883935   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.883944   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:17.883953   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.883965   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.931481   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.931522   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.983315   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.983363   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.996941   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.996981   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:18.067254   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:18.067279   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:18.067295   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:20.642099   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.655941   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.656014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.692525   80228 cri.go:89] found id: ""
	I0814 17:41:20.692554   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.692565   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:20.692577   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.692634   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.727721   80228 cri.go:89] found id: ""
	I0814 17:41:20.727755   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.727769   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:20.727778   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.727845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.770441   80228 cri.go:89] found id: ""
	I0814 17:41:20.770471   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.770481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:20.770488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.770550   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.807932   80228 cri.go:89] found id: ""
	I0814 17:41:20.807961   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.807968   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:20.807975   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.808030   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.849919   80228 cri.go:89] found id: ""
	I0814 17:41:20.849944   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.849963   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:20.849970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.850045   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.887351   80228 cri.go:89] found id: ""
	I0814 17:41:20.887382   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.887393   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:20.887401   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.887465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.921284   80228 cri.go:89] found id: ""
	I0814 17:41:20.921310   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.921321   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.921328   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:20.921409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:20.955238   80228 cri.go:89] found id: ""
	I0814 17:41:20.955267   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.955278   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:20.955288   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.955314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:21.024544   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:21.024565   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.024579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:21.103987   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.104019   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.145515   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.145550   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.197307   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.197346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.514682   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.015152   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.429364   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.445075   79521 api_server.go:72] duration metric: took 4m16.759338748s to wait for apiserver process to appear ...
	I0814 17:41:20.445102   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:41:20.445133   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.445179   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.477630   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.477655   79521 cri.go:89] found id: ""
	I0814 17:41:20.477663   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:20.477714   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.481667   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.481728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.514443   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.514465   79521 cri.go:89] found id: ""
	I0814 17:41:20.514473   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:20.514516   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.518344   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.518401   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.559625   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:20.559647   79521 cri.go:89] found id: ""
	I0814 17:41:20.559653   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:20.559706   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.564137   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.564203   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.603504   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:20.603531   79521 cri.go:89] found id: ""
	I0814 17:41:20.603540   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:20.603602   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.608260   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.608334   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.641466   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:20.641487   79521 cri.go:89] found id: ""
	I0814 17:41:20.641494   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:20.641538   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.645566   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.645625   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.685003   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:20.685032   79521 cri.go:89] found id: ""
	I0814 17:41:20.685042   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:20.685104   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.690347   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.690429   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.733753   79521 cri.go:89] found id: ""
	I0814 17:41:20.733782   79521 logs.go:276] 0 containers: []
	W0814 17:41:20.733793   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.733800   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:20.733862   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:20.781659   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:20.781683   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:20.781689   79521 cri.go:89] found id: ""
	I0814 17:41:20.781697   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:20.781753   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.786293   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.790358   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.790377   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:20.916473   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:20.916513   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.968706   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:20.968743   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:21.003507   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:21.003546   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:21.049909   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:21.049961   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:21.090052   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:21.090080   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:21.129551   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.129585   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.174792   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.174828   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.247392   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.247440   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:21.261095   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:21.261129   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:21.306583   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:21.306616   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:21.339602   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:21.339642   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:21.397695   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.397732   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.301807   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:41:24.306392   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:41:24.307364   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:41:24.307390   79521 api_server.go:131] duration metric: took 3.862280551s to wait for apiserver health ...
	I0814 17:41:24.307398   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:41:24.307418   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:24.307463   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:24.342519   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:24.342552   79521 cri.go:89] found id: ""
	I0814 17:41:24.342561   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:24.342627   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.346361   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:24.346422   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:24.386973   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:24.387001   79521 cri.go:89] found id: ""
	I0814 17:41:24.387012   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:24.387066   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.390942   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:24.390999   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:24.426841   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:24.426863   79521 cri.go:89] found id: ""
	I0814 17:41:24.426872   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:24.426927   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.430856   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:24.430917   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:24.467024   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.467050   79521 cri.go:89] found id: ""
	I0814 17:41:24.467059   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:24.467117   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.471659   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:24.471728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:24.506759   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:24.506786   79521 cri.go:89] found id: ""
	I0814 17:41:24.506799   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:24.506857   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.511660   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:24.511728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:24.547768   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:24.547795   79521 cri.go:89] found id: ""
	I0814 17:41:24.547805   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:24.547862   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.552881   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:24.552941   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:24.588519   79521 cri.go:89] found id: ""
	I0814 17:41:24.588544   79521 logs.go:276] 0 containers: []
	W0814 17:41:24.588551   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:24.588557   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:24.588602   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:24.624604   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.624626   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:24.624630   79521 cri.go:89] found id: ""
	I0814 17:41:24.624636   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:24.624691   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.628703   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.632611   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:24.632636   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.671903   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:24.671935   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.709821   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.709851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:25.107477   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:25.107515   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:25.221012   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:25.221041   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.760924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.259780   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.260347   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.712584   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:23.726467   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:23.726545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:23.762871   80228 cri.go:89] found id: ""
	I0814 17:41:23.762906   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.762916   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:23.762922   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:23.762972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:23.800068   80228 cri.go:89] found id: ""
	I0814 17:41:23.800096   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.800105   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:23.800113   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:23.800173   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:23.834913   80228 cri.go:89] found id: ""
	I0814 17:41:23.834945   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.834956   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:23.834963   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:23.835022   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:23.871196   80228 cri.go:89] found id: ""
	I0814 17:41:23.871222   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.871233   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:23.871240   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:23.871294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:23.907830   80228 cri.go:89] found id: ""
	I0814 17:41:23.907854   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.907862   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:23.907868   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:23.907926   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:23.941110   80228 cri.go:89] found id: ""
	I0814 17:41:23.941133   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.941141   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:23.941146   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:23.941197   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:23.973602   80228 cri.go:89] found id: ""
	I0814 17:41:23.973631   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.973649   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:23.973655   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:23.973710   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:24.007398   80228 cri.go:89] found id: ""
	I0814 17:41:24.007436   80228 logs.go:276] 0 containers: []
	W0814 17:41:24.007450   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:24.007462   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:24.007478   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:24.061830   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:24.061867   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:24.075012   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:24.075046   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:24.148666   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:24.148692   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.148703   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.230208   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:24.230248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:22.513616   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.013383   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.272397   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:25.272429   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:25.317574   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:25.317603   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:25.352239   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:25.352271   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:25.409997   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:25.410030   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:25.443875   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:25.443899   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:25.490987   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:25.491023   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:25.563495   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:25.563531   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:25.577305   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:25.577345   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:28.147823   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:41:28.147855   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.147860   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.147864   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.147867   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.147870   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.147874   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.147879   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.147883   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.147890   79521 system_pods.go:74] duration metric: took 3.840486938s to wait for pod list to return data ...
	I0814 17:41:28.147898   79521 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:41:28.150377   79521 default_sa.go:45] found service account: "default"
	I0814 17:41:28.150398   79521 default_sa.go:55] duration metric: took 2.493777ms for default service account to be created ...
	I0814 17:41:28.150406   79521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:41:28.154470   79521 system_pods.go:86] 8 kube-system pods found
	I0814 17:41:28.154494   79521 system_pods.go:89] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.154500   79521 system_pods.go:89] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.154504   79521 system_pods.go:89] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.154510   79521 system_pods.go:89] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.154514   79521 system_pods.go:89] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.154519   79521 system_pods.go:89] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.154525   79521 system_pods.go:89] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.154530   79521 system_pods.go:89] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.154537   79521 system_pods.go:126] duration metric: took 4.125964ms to wait for k8s-apps to be running ...
	I0814 17:41:28.154544   79521 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:41:28.154585   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:28.170494   79521 system_svc.go:56] duration metric: took 15.940728ms WaitForService to wait for kubelet
	I0814 17:41:28.170524   79521 kubeadm.go:582] duration metric: took 4m24.484791018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:41:28.170545   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:41:28.173368   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:41:28.173395   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:41:28.173407   79521 node_conditions.go:105] duration metric: took 2.858344ms to run NodePressure ...
	I0814 17:41:28.173417   79521 start.go:241] waiting for startup goroutines ...
	I0814 17:41:28.173424   79521 start.go:246] waiting for cluster config update ...
	I0814 17:41:28.173435   79521 start.go:255] writing updated cluster config ...
	I0814 17:41:28.173730   79521 ssh_runner.go:195] Run: rm -f paused
	I0814 17:41:28.219460   79521 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:41:28.221461   79521 out.go:177] * Done! kubectl is now configured to use "embed-certs-309673" cluster and "default" namespace by default
	I0814 17:41:27.761580   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:30.260454   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:26.776204   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:26.789057   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:26.789132   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:26.822531   80228 cri.go:89] found id: ""
	I0814 17:41:26.822564   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.822575   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:26.822590   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:26.822651   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:26.855314   80228 cri.go:89] found id: ""
	I0814 17:41:26.855353   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.855365   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:26.855372   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:26.855434   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:26.889389   80228 cri.go:89] found id: ""
	I0814 17:41:26.889413   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.889421   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:26.889427   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:26.889485   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:26.925478   80228 cri.go:89] found id: ""
	I0814 17:41:26.925500   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.925508   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:26.925514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:26.925560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:26.957012   80228 cri.go:89] found id: ""
	I0814 17:41:26.957042   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.957053   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:26.957061   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:26.957114   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:26.989358   80228 cri.go:89] found id: ""
	I0814 17:41:26.989388   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.989399   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:26.989406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:26.989468   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:27.024761   80228 cri.go:89] found id: ""
	I0814 17:41:27.024786   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.024805   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:27.024830   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:27.024895   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:27.059172   80228 cri.go:89] found id: ""
	I0814 17:41:27.059204   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.059215   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:27.059226   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:27.059240   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.096123   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:27.096151   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:27.147689   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:27.147719   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:27.161454   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:27.161483   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:27.234644   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:27.234668   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:27.234680   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:29.817428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:29.831731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:29.831811   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:29.868531   80228 cri.go:89] found id: ""
	I0814 17:41:29.868567   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.868577   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:29.868585   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:29.868657   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:29.913578   80228 cri.go:89] found id: ""
	I0814 17:41:29.913602   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.913611   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:29.913617   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:29.913677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:29.963916   80228 cri.go:89] found id: ""
	I0814 17:41:29.963939   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.963946   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:29.963952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:29.964011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:30.016735   80228 cri.go:89] found id: ""
	I0814 17:41:30.016763   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.016773   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:30.016781   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:30.016841   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:30.048852   80228 cri.go:89] found id: ""
	I0814 17:41:30.048880   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.048890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:30.048898   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:30.048960   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:30.080291   80228 cri.go:89] found id: ""
	I0814 17:41:30.080324   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.080335   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:30.080343   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:30.080506   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:30.113876   80228 cri.go:89] found id: ""
	I0814 17:41:30.113904   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.113914   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:30.113921   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:30.113984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:30.147568   80228 cri.go:89] found id: ""
	I0814 17:41:30.147594   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.147604   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:30.147614   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:30.147627   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:30.197596   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:30.197630   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:30.210576   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:30.210602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:30.277711   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:30.277731   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:30.277746   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:30.356556   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:30.356590   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.013699   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:29.014020   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:31.512974   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:32.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:35.254066   79871 pod_ready.go:81] duration metric: took 4m0.000392709s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:35.254095   79871 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:41:35.254112   79871 pod_ready.go:38] duration metric: took 4m12.044429915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:35.254137   79871 kubeadm.go:597] duration metric: took 4m20.041916203s to restartPrimaryControlPlane
	W0814 17:41:35.254189   79871 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:35.254218   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:32.892697   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:32.909435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:32.909497   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:32.945055   80228 cri.go:89] found id: ""
	I0814 17:41:32.945080   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.945088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:32.945094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:32.945150   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:32.979266   80228 cri.go:89] found id: ""
	I0814 17:41:32.979294   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.979305   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:32.979312   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:32.979398   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:33.014260   80228 cri.go:89] found id: ""
	I0814 17:41:33.014286   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.014294   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:33.014299   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:33.014351   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:33.047590   80228 cri.go:89] found id: ""
	I0814 17:41:33.047622   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.047633   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:33.047646   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:33.047711   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:33.081258   80228 cri.go:89] found id: ""
	I0814 17:41:33.081294   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.081328   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:33.081337   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:33.081403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:33.112209   80228 cri.go:89] found id: ""
	I0814 17:41:33.112237   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.112247   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:33.112254   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:33.112318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:33.143854   80228 cri.go:89] found id: ""
	I0814 17:41:33.143892   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.143904   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:33.143913   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:33.143977   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:33.175147   80228 cri.go:89] found id: ""
	I0814 17:41:33.175190   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.175201   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:33.175212   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:33.175226   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:33.212877   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:33.212908   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:33.268067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:33.268103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:33.281357   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:33.281386   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:33.350233   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:33.350257   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:33.350269   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:35.929498   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:35.942290   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:35.942354   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:35.975782   80228 cri.go:89] found id: ""
	I0814 17:41:35.975809   80228 logs.go:276] 0 containers: []
	W0814 17:41:35.975818   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:35.975826   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:35.975886   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:36.008165   80228 cri.go:89] found id: ""
	I0814 17:41:36.008191   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.008200   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:36.008206   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:36.008262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:36.044912   80228 cri.go:89] found id: ""
	I0814 17:41:36.044937   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.044945   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:36.044954   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:36.045002   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:36.078068   80228 cri.go:89] found id: ""
	I0814 17:41:36.078096   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.078108   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:36.078116   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:36.078179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:36.110429   80228 cri.go:89] found id: ""
	I0814 17:41:36.110456   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.110467   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:36.110480   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:36.110540   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:36.142086   80228 cri.go:89] found id: ""
	I0814 17:41:36.142111   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.142119   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:36.142125   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:36.142186   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:36.172738   80228 cri.go:89] found id: ""
	I0814 17:41:36.172761   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.172769   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:36.172775   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:36.172831   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:36.204345   80228 cri.go:89] found id: ""
	I0814 17:41:36.204368   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.204376   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:36.204388   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:36.204403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:36.216667   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:36.216689   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:36.279509   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:36.279528   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:36.279540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:33.513591   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.013400   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.360411   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:36.360447   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:36.398193   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:36.398230   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:38.952415   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:38.968484   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:38.968554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:39.002450   80228 cri.go:89] found id: ""
	I0814 17:41:39.002479   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.002486   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:39.002493   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:39.002551   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:39.035840   80228 cri.go:89] found id: ""
	I0814 17:41:39.035868   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.035876   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:39.035882   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:39.035934   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:39.069900   80228 cri.go:89] found id: ""
	I0814 17:41:39.069929   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.069940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:39.069946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:39.069999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:39.104657   80228 cri.go:89] found id: ""
	I0814 17:41:39.104681   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.104689   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:39.104695   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:39.104751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:39.137279   80228 cri.go:89] found id: ""
	I0814 17:41:39.137312   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.137322   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:39.137330   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:39.137403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:39.170377   80228 cri.go:89] found id: ""
	I0814 17:41:39.170414   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.170424   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:39.170430   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:39.170491   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:39.205742   80228 cri.go:89] found id: ""
	I0814 17:41:39.205779   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.205790   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:39.205796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:39.205850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:39.239954   80228 cri.go:89] found id: ""
	I0814 17:41:39.239979   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.239987   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:39.239994   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:39.240011   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:39.276587   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:39.276619   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:39.329286   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:39.329322   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:39.342232   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:39.342257   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:39.411043   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:39.411063   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:39.411075   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:38.013562   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:40.013740   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:41.994479   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:42.007736   80228 kubeadm.go:597] duration metric: took 4m4.488869114s to restartPrimaryControlPlane
	W0814 17:41:42.007822   80228 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:42.007871   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:42.513259   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:45.013455   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:46.541593   80228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.533697889s)
	I0814 17:41:46.541676   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:46.556181   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:41:46.565943   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:41:46.575481   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:41:46.575501   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:41:46.575549   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:41:46.585143   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:41:46.585202   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:41:46.595157   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:41:46.604539   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:41:46.604600   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:41:46.613345   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.622186   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:41:46.622242   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.631221   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:41:46.640649   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:41:46.640706   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:41:46.650161   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:41:46.724104   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:41:46.724182   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:41:46.860463   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:41:46.860606   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:41:46.860725   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:41:47.036697   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:41:47.038444   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:41:47.038561   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:41:47.038670   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:41:47.038775   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:41:47.038860   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:41:47.038973   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:41:47.039067   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:41:47.039172   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:41:47.039256   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:41:47.039359   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:41:47.039456   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:41:47.039516   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:41:47.039587   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:41:47.278696   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:41:47.664300   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:41:47.988137   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:41:48.076560   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:41:48.093447   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:41:48.094656   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:41:48.094793   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:41:48.253225   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:41:48.255034   80228 out.go:204]   - Booting up control plane ...
	I0814 17:41:48.255160   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:41:48.259041   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:41:48.260074   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:41:48.260862   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:41:48.262910   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:41:47.513415   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:50.012937   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:52.013499   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:54.514150   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:57.013146   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:59.013393   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.014185   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.441261   79871 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.187019598s)
	I0814 17:42:01.441333   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:01.457213   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:01.466802   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:01.475719   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:01.475736   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:01.475784   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:42:01.484555   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:01.484618   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:01.493956   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:42:01.503873   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:01.503923   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:01.514710   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.524473   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:01.524531   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.534749   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:42:01.544491   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:01.544558   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:01.555481   79871 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:01.599801   79871 kubeadm.go:310] W0814 17:42:01.575622    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.600615   79871 kubeadm.go:310] W0814 17:42:01.576625    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.703064   79871 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:03.513007   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:05.514241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:09.627141   79871 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:09.627216   79871 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:09.627344   79871 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:09.627480   79871 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:09.627638   79871 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:09.627717   79871 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:09.629272   79871 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:09.629370   79871 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:09.629472   79871 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:09.629592   79871 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:09.629712   79871 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:09.629780   79871 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:09.629826   79871 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:09.629898   79871 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:09.629963   79871 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:09.630076   79871 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:09.630198   79871 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:09.630253   79871 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:09.630314   79871 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:09.630357   79871 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:09.630412   79871 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:09.630457   79871 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:09.630509   79871 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:09.630560   79871 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:09.630629   79871 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:09.630688   79871 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:09.632664   79871 out.go:204]   - Booting up control plane ...
	I0814 17:42:09.632763   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:09.632878   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:09.632963   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:09.633100   79871 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:09.633207   79871 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:09.633252   79871 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:09.633412   79871 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:09.633542   79871 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:09.633624   79871 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004125702s
	I0814 17:42:09.633727   79871 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:09.633814   79871 kubeadm.go:310] [api-check] The API server is healthy after 4.501648596s
	I0814 17:42:09.633967   79871 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:09.634119   79871 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:09.634169   79871 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:09.634328   79871 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-885666 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:09.634400   79871 kubeadm.go:310] [bootstrap-token] Using token: 17ct2j.hazurgskaspe26qx
	I0814 17:42:09.635732   79871 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:09.635859   79871 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:09.635990   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:09.636141   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:09.636250   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:09.636347   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:09.636485   79871 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:09.636657   79871 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:09.636708   79871 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:09.636747   79871 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:09.636753   79871 kubeadm.go:310] 
	I0814 17:42:09.636813   79871 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:09.636835   79871 kubeadm.go:310] 
	I0814 17:42:09.636972   79871 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:09.636995   79871 kubeadm.go:310] 
	I0814 17:42:09.637029   79871 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:09.637120   79871 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:09.637185   79871 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:09.637195   79871 kubeadm.go:310] 
	I0814 17:42:09.637267   79871 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:09.637277   79871 kubeadm.go:310] 
	I0814 17:42:09.637315   79871 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:09.637321   79871 kubeadm.go:310] 
	I0814 17:42:09.637384   79871 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:09.637461   79871 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:09.637523   79871 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:09.637529   79871 kubeadm.go:310] 
	I0814 17:42:09.637623   79871 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:09.637691   79871 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:09.637703   79871 kubeadm.go:310] 
	I0814 17:42:09.637779   79871 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.637866   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:09.637890   79871 kubeadm.go:310] 	--control-plane 
	I0814 17:42:09.637899   79871 kubeadm.go:310] 
	I0814 17:42:09.638010   79871 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:09.638020   79871 kubeadm.go:310] 
	I0814 17:42:09.638098   79871 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.638211   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
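	Should the join command above need to be reconstructed later, the token and CA-cert hash can be re-derived on the control plane; a standard sketch (the pki path shown is the kubeadm default and is an assumption here, since this profile keeps certificates under /var/lib/minikube/certs):

	    # List current bootstrap tokens
	    kubeadm token list
	    # Recompute the --discovery-token-ca-cert-hash from the cluster CA certificate
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'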
	I0814 17:42:09.638234   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:42:09.638246   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:09.639748   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:09.641031   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:09.652173   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
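	The 496-byte conflist copied above is not printed in the log; a minimal bridge CNI configuration of the kind written to /etc/cni/net.d/1-k8s.conflist would look roughly like the sketch below (illustrative only; the subnet and exact fields are assumptions, not the file this run installed):

	    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF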
	I0814 17:42:09.670482   79871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-885666 minikube.k8s.io/updated_at=2024_08_14T17_42_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=default-k8s-diff-port-885666 minikube.k8s.io/primary=true
	I0814 17:42:09.703097   79871 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:09.881340   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:10.381470   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:07.516539   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.015456   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.882013   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.382239   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.881638   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.381703   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.881401   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.381524   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.881402   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.381696   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.498441   79871 kubeadm.go:1113] duration metric: took 4.827929439s to wait for elevateKubeSystemPrivileges
	I0814 17:42:14.498474   79871 kubeadm.go:394] duration metric: took 4m59.336328921s to StartCluster
	I0814 17:42:14.498493   79871 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.498581   79871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:42:14.501029   79871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.501309   79871 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:42:14.501432   79871 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:42:14.501508   79871 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501541   79871 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501550   79871 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:42:14.501577   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501590   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:42:14.501619   79871 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501667   79871 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501677   79871 addons.go:243] addon metrics-server should already be in state true
	I0814 17:42:14.501716   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501593   79871 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501840   79871 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-885666"
	I0814 17:42:14.502106   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502142   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502160   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502174   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502176   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502199   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502371   79871 out.go:177] * Verifying Kubernetes components...
	I0814 17:42:14.504085   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:42:14.519401   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0814 17:42:14.519631   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0814 17:42:14.520085   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520196   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520701   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520722   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.520789   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0814 17:42:14.520978   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520994   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.521255   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.521519   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521524   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521703   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.522021   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.522051   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.522548   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.522572   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.522864   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.523507   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.523550   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.525737   79871 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.525759   79871 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:42:14.525789   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.526144   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.526170   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.538930   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0814 17:42:14.538995   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0814 17:42:14.539567   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.539594   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.540125   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540138   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540266   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540289   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540624   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540770   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540825   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.540970   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.542540   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.542848   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.544439   79871 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:42:14.544444   79871 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:42:14.544881   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0814 17:42:14.545315   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.545575   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:42:14.545586   79871 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:42:14.545601   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545672   79871 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.545691   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:42:14.545708   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545750   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.545759   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.546339   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.547167   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.547290   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.549794   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.549829   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550423   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550637   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550707   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551025   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551119   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551168   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551302   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.551658   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.567203   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I0814 17:42:14.567613   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.568141   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.568167   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.568484   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.568678   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.570337   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.570867   79871 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.570888   79871 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:42:14.570906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.574091   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574562   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.574586   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574667   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.574857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.575039   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.575197   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.675594   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:42:14.694520   79871 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702650   79871 node_ready.go:49] node "default-k8s-diff-port-885666" has status "Ready":"True"
	I0814 17:42:14.702672   79871 node_ready.go:38] duration metric: took 8.119351ms for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702684   79871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:14.707535   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
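	The readiness polling above (node first, then each system-critical pod) has a manual equivalent using kubectl's wait verb; a hedged sketch, assuming the kubectl context carries the profile name:

	    # Equivalent manual readiness check, not part of this run
	    kubectl --context default-k8s-diff-port-885666 wait --for=condition=Ready \
	        node/default-k8s-diff-port-885666 --timeout=6m
	    kubectl --context default-k8s-diff-port-885666 -n kube-system wait --for=condition=Ready \
	        pod -l k8s-app=kube-dns --timeout=6m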
	I0814 17:42:14.762686   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.805275   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.837118   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:42:14.837143   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:42:14.881848   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:42:14.881872   79871 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:42:14.902731   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.902762   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903058   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903076   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.903092   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.903111   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903448   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.903484   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903493   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.908980   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.908995   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.909239   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.909310   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.909336   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.920224   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:14.920249   79871 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:42:14.955256   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:15.297167   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297190   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297544   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.297602   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297631   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.297649   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297659   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297865   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297885   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842348   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842376   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.842688   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.842707   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842716   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842738   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.842805   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.843057   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.843070   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.843081   79871 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-885666"
	I0814 17:42:15.844747   79871 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 17:42:12.513055   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:14.514298   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:15.845895   79871 addons.go:510] duration metric: took 1.344461878s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
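	Once the metrics-server addon is enabled as above, its health can be checked independently of the test's polling; an illustrative sketch:

	    # Confirm the metrics-server deployment and its APIService are serving
	    kubectl -n kube-system get deploy metrics-server
	    kubectl get apiservice v1beta1.metrics.k8s.io
	    kubectl top nodes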
	I0814 17:42:16.714014   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:18.715243   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:17.013231   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:19.013966   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:20.507978   79367 pod_ready.go:81] duration metric: took 4m0.001138158s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	E0814 17:42:20.508026   79367 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:42:20.508048   79367 pod_ready.go:38] duration metric: took 4m6.305785273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:20.508081   79367 kubeadm.go:597] duration metric: took 4m13.455842043s to restartPrimaryControlPlane
	W0814 17:42:20.508145   79367 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:42:20.508186   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
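	Before the reset above, the pod that never became Ready within 4m0s could be inspected with the usual primitives; a diagnostic sketch (pod name copied from the log lines above):

	    # Why did metrics-server-6867b74b74-8c2cx stay unready? (illustrative)
	    kubectl -n kube-system describe pod metrics-server-6867b74b74-8c2cx
	    kubectl -n kube-system logs deploy/metrics-server --all-containers
	    kubectl -n kube-system get events --sort-by=.lastTimestamp | tail -n 20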
	I0814 17:42:20.714660   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.714687   79871 pod_ready.go:81] duration metric: took 6.007129076s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.714696   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719517   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.719542   79871 pod_ready.go:81] duration metric: took 4.838754ms for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719554   79871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724787   79871 pod_ready.go:92] pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.724816   79871 pod_ready.go:81] duration metric: took 5.250194ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724834   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731431   79871 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.731456   79871 pod_ready.go:81] duration metric: took 1.00661383s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731468   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736442   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.736467   79871 pod_ready.go:81] duration metric: took 4.989787ms for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736480   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911495   79871 pod_ready.go:92] pod "kube-proxy-254cb" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.911520   79871 pod_ready.go:81] duration metric: took 175.03218ms for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911529   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311700   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:22.311730   79871 pod_ready.go:81] duration metric: took 400.194781ms for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311739   79871 pod_ready.go:38] duration metric: took 7.609043377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:22.311752   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:42:22.311800   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:42:22.326995   79871 api_server.go:72] duration metric: took 7.825649112s to wait for apiserver process to appear ...
	I0814 17:42:22.327018   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:42:22.327036   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:42:22.331069   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:42:22.332077   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:42:22.332096   79871 api_server.go:131] duration metric: took 5.0724ms to wait for apiserver health ...
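	The healthz probe above can be reproduced against the same endpoint with curl; whether anonymous access to /healthz is allowed depends on the apiserver's auth settings, so both variants are shown as a hedged sketch (certificate paths follow the certificateDir reported earlier in this log):

	    # Manual equivalent of the apiserver health check (illustrative)
	    curl -sk https://192.168.50.184:8444/healthz
	    # If anonymous /healthz access is disabled, present the client certificates instead
	    curl -s --cacert /var/lib/minikube/certs/ca.crt \
	         --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	         --key  /var/lib/minikube/certs/apiserver-kubelet-client.key \
	         https://192.168.50.184:8444/healthz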
	I0814 17:42:22.332103   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:42:22.514565   79871 system_pods.go:59] 9 kube-system pods found
	I0814 17:42:22.514595   79871 system_pods.go:61] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.514601   79871 system_pods.go:61] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.514606   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.514610   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.514614   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.514617   79871 system_pods.go:61] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.514620   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.514626   79871 system_pods.go:61] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.514631   79871 system_pods.go:61] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.514639   79871 system_pods.go:74] duration metric: took 182.531543ms to wait for pod list to return data ...
	I0814 17:42:22.514647   79871 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:42:22.713101   79871 default_sa.go:45] found service account: "default"
	I0814 17:42:22.713125   79871 default_sa.go:55] duration metric: took 198.471814ms for default service account to be created ...
	I0814 17:42:22.713136   79871 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:42:22.914576   79871 system_pods.go:86] 9 kube-system pods found
	I0814 17:42:22.914619   79871 system_pods.go:89] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.914628   79871 system_pods.go:89] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.914635   79871 system_pods.go:89] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.914643   79871 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.914650   79871 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.914657   79871 system_pods.go:89] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.914665   79871 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.914678   79871 system_pods.go:89] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.914689   79871 system_pods.go:89] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.914705   79871 system_pods.go:126] duration metric: took 201.563199ms to wait for k8s-apps to be running ...
	I0814 17:42:22.914716   79871 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:42:22.914768   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:22.928499   79871 system_svc.go:56] duration metric: took 13.774119ms WaitForService to wait for kubelet
	I0814 17:42:22.928525   79871 kubeadm.go:582] duration metric: took 8.427183796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:42:22.928543   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:42:23.112363   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:42:23.112398   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:42:23.112410   79871 node_conditions.go:105] duration metric: took 183.861382ms to run NodePressure ...
	I0814 17:42:23.112423   79871 start.go:241] waiting for startup goroutines ...
	I0814 17:42:23.112432   79871 start.go:246] waiting for cluster config update ...
	I0814 17:42:23.112446   79871 start.go:255] writing updated cluster config ...
	I0814 17:42:23.112792   79871 ssh_runner.go:195] Run: rm -f paused
	I0814 17:42:23.162700   79871 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:42:23.164689   79871 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-885666" cluster and "default" namespace by default
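	The "Done!" message above means kubeconfig already points at the new cluster; a quick sanity sketch:

	    # Verify the active context and the cluster it reaches (illustrative)
	    kubectl config current-context      # expect: default-k8s-diff-port-885666
	    kubectl get nodes -o wide
	    kubectl get pods -A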
	I0814 17:42:28.263217   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:42:28.263629   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:28.263853   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:33.264169   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:33.264403   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:43.264648   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:43.264858   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
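	The repeating kubelet-check failures above (connection refused on 127.0.0.1:10248 for process 80228) are the kind normally triaged directly on the node; a hedged troubleshooting sketch:

	    # On the node: is the kubelet running, and why might it be failing? (illustrative)
	    sudo systemctl status kubelet
	    sudo journalctl -u kubelet --no-pager -n 50
	    curl -s http://localhost:10248/healthz; echo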
	I0814 17:42:46.859569   79367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.351355314s)
	I0814 17:42:46.859653   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:46.875530   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:46.884772   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:46.894185   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:46.894208   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:46.894258   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:42:46.903690   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:46.903748   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:46.913277   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:42:46.922120   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:46.922173   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:46.931143   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.939936   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:46.939997   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.949257   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:42:46.958109   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:46.958169   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:46.967609   79367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:47.010119   79367 kubeadm.go:310] W0814 17:42:46.983769    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.010889   79367 kubeadm.go:310] W0814 17:42:46.984558    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.122746   79367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:55.571963   79367 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:55.572017   79367 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:55.572127   79367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:55.572236   79367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:55.572314   79367 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:55.572385   79367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:55.574178   79367 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:55.574288   79367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:55.574372   79367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:55.574485   79367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:55.574573   79367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:55.574669   79367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:55.574740   79367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:55.574811   79367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:55.574909   79367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:55.575014   79367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:55.575135   79367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:55.575187   79367 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:55.575238   79367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:55.575288   79367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:55.575359   79367 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:55.575438   79367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:55.575521   79367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:55.575608   79367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:55.575759   79367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:55.575869   79367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:55.577331   79367 out.go:204]   - Booting up control plane ...
	I0814 17:42:55.577429   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:55.577511   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:55.577587   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:55.577773   79367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:55.577908   79367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:55.577968   79367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:55.578152   79367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:55.578301   79367 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:55.578368   79367 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.938552ms
	I0814 17:42:55.578428   79367 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:55.578480   79367 kubeadm.go:310] [api-check] The API server is healthy after 5.00239154s
	I0814 17:42:55.578605   79367 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:55.578777   79367 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:55.578863   79367 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:55.579025   79367 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-545149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:55.579100   79367 kubeadm.go:310] [bootstrap-token] Using token: qzd0yh.k8a8j7f6vmqndeav
	I0814 17:42:55.580318   79367 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:55.580429   79367 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:55.580503   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:55.580683   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:55.580839   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:55.580935   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:55.581047   79367 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:55.581197   79367 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:55.581235   79367 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:55.581279   79367 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:55.581285   79367 kubeadm.go:310] 
	I0814 17:42:55.581339   79367 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:55.581355   79367 kubeadm.go:310] 
	I0814 17:42:55.581470   79367 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:55.581480   79367 kubeadm.go:310] 
	I0814 17:42:55.581519   79367 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:55.581586   79367 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:55.581654   79367 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:55.581663   79367 kubeadm.go:310] 
	I0814 17:42:55.581749   79367 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:55.581757   79367 kubeadm.go:310] 
	I0814 17:42:55.581798   79367 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:55.581804   79367 kubeadm.go:310] 
	I0814 17:42:55.581857   79367 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:55.581944   79367 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:55.582007   79367 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:55.582014   79367 kubeadm.go:310] 
	I0814 17:42:55.582085   79367 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:55.582148   79367 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:55.582154   79367 kubeadm.go:310] 
	I0814 17:42:55.582221   79367 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582313   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:55.582333   79367 kubeadm.go:310] 	--control-plane 
	I0814 17:42:55.582336   79367 kubeadm.go:310] 
	I0814 17:42:55.582426   79367 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:55.582434   79367 kubeadm.go:310] 
	I0814 17:42:55.582518   79367 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582678   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:55.582691   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:42:55.582697   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:55.584337   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:55.585493   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:55.596201   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:42:55.617012   79367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:55.617115   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:55.617152   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-545149 minikube.k8s.io/updated_at=2024_08_14T17_42_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=no-preload-545149 minikube.k8s.io/primary=true
	I0814 17:42:55.794262   79367 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:55.794421   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.294450   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.795280   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.294604   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.794700   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.294863   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.795404   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.295066   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.794529   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.294720   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.409254   79367 kubeadm.go:1113] duration metric: took 4.79220609s to wait for elevateKubeSystemPrivileges
	I0814 17:43:00.409300   79367 kubeadm.go:394] duration metric: took 4m53.401266889s to StartCluster
	I0814 17:43:00.409323   79367 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.409419   79367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:43:00.411076   79367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.411313   79367 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:43:00.411438   79367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:43:00.411521   79367 addons.go:69] Setting storage-provisioner=true in profile "no-preload-545149"
	I0814 17:43:00.411529   79367 addons.go:69] Setting default-storageclass=true in profile "no-preload-545149"
	I0814 17:43:00.411552   79367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545149"
	I0814 17:43:00.411554   79367 addons.go:234] Setting addon storage-provisioner=true in "no-preload-545149"
	I0814 17:43:00.411564   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:43:00.411568   79367 addons.go:69] Setting metrics-server=true in profile "no-preload-545149"
	W0814 17:43:00.411566   79367 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:43:00.411601   79367 addons.go:234] Setting addon metrics-server=true in "no-preload-545149"
	W0814 17:43:00.411612   79367 addons.go:243] addon metrics-server should already be in state true
	I0814 17:43:00.411637   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411646   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411922   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.411954   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412019   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412053   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412076   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412103   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412914   79367 out.go:177] * Verifying Kubernetes components...
	I0814 17:43:00.414471   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:43:00.427965   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0814 17:43:00.427966   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0814 17:43:00.428460   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428608   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428985   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429004   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429130   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429147   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429206   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0814 17:43:00.429346   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429443   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429498   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.429543   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.430131   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.430152   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.430435   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.430446   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.430718   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.431238   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.431270   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.433273   79367 addons.go:234] Setting addon default-storageclass=true in "no-preload-545149"
	W0814 17:43:00.433292   79367 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:43:00.433319   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.433551   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.433581   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.450138   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0814 17:43:00.450327   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0814 17:43:00.450697   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.450818   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.451527   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451547   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451695   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451706   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451958   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.452224   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.452283   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.453110   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.453141   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.453937   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.455467   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0814 17:43:00.455825   79367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:43:00.455934   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.456456   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.456479   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.456964   79367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.456981   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:43:00.456999   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.457000   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.457144   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.459264   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.460208   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460606   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.460636   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460750   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.460858   79367 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:43:00.460989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.461150   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.461281   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.462118   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:43:00.462138   79367 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:43:00.462156   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.465200   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465643   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.465710   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465829   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.466004   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.466165   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.466312   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.478054   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0814 17:43:00.478616   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.479176   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.479198   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.479536   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.479725   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.481351   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.481574   79367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.481588   79367 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:43:00.481606   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.484454   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484738   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.484771   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.485222   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.485370   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.485485   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.617562   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:43:00.665134   79367 node_ready.go:35] waiting up to 6m0s for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673659   79367 node_ready.go:49] node "no-preload-545149" has status "Ready":"True"
	I0814 17:43:00.673680   79367 node_ready.go:38] duration metric: took 8.508683ms for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673689   79367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:00.680313   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:00.810401   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.827621   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.871727   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:43:00.871752   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:43:00.969061   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:43:00.969088   79367 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:43:01.103808   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.103839   79367 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:43:01.198160   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.880623   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052957744s)
	I0814 17:43:01.880683   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880697   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.880749   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070305568s)
	I0814 17:43:01.880785   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880804   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881075   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881093   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881103   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881115   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881248   79367 main.go:141] libmachine: (no-preload-545149) DBG | Closing plugin on server side
	I0814 17:43:01.881284   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881312   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881336   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881375   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881385   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881396   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881682   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881703   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.896050   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.896076   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.896351   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.896370   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131404   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131427   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.131744   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.131768   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131780   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131788   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.132004   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.132026   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.132042   79367 addons.go:475] Verifying addon metrics-server=true in "no-preload-545149"
	I0814 17:43:02.133699   79367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:43:03.265508   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:03.265720   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:02.135365   79367 addons.go:510] duration metric: took 1.72392081s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:43:02.687160   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:05.186062   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:07.187193   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:09.188957   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.188990   79367 pod_ready.go:81] duration metric: took 8.508650006s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.189003   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194469   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.194492   79367 pod_ready.go:81] duration metric: took 5.48133ms for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194501   79367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199127   79367 pod_ready.go:92] pod "etcd-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.199150   79367 pod_ready.go:81] duration metric: took 4.643296ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199159   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203804   79367 pod_ready.go:92] pod "kube-apiserver-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.203825   79367 pod_ready.go:81] duration metric: took 4.659513ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203837   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208443   79367 pod_ready.go:92] pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.208465   79367 pod_ready.go:81] duration metric: took 4.620634ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208479   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584443   79367 pod_ready.go:92] pod "kube-proxy-s6bps" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.584471   79367 pod_ready.go:81] duration metric: took 375.985094ms for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584481   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985476   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.985504   79367 pod_ready.go:81] duration metric: took 401.014791ms for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985515   79367 pod_ready.go:38] duration metric: took 9.311816641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:09.985534   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:43:09.985603   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:43:10.002239   79367 api_server.go:72] duration metric: took 9.590875358s to wait for apiserver process to appear ...
	I0814 17:43:10.002276   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:43:10.002304   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:43:10.009410   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:43:10.010351   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:43:10.010381   79367 api_server.go:131] duration metric: took 8.098086ms to wait for apiserver health ...
	I0814 17:43:10.010389   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:43:10.189597   79367 system_pods.go:59] 9 kube-system pods found
	I0814 17:43:10.189629   79367 system_pods.go:61] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.189634   79367 system_pods.go:61] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.189638   79367 system_pods.go:61] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.189642   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.189646   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.189649   79367 system_pods.go:61] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.189652   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.189658   79367 system_pods.go:61] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.189662   79367 system_pods.go:61] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.189670   79367 system_pods.go:74] duration metric: took 179.275641ms to wait for pod list to return data ...
	I0814 17:43:10.189678   79367 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:43:10.385690   79367 default_sa.go:45] found service account: "default"
	I0814 17:43:10.385715   79367 default_sa.go:55] duration metric: took 196.030333ms for default service account to be created ...
	I0814 17:43:10.385723   79367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:43:10.590237   79367 system_pods.go:86] 9 kube-system pods found
	I0814 17:43:10.590272   79367 system_pods.go:89] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.590279   79367 system_pods.go:89] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.590285   79367 system_pods.go:89] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.590291   79367 system_pods.go:89] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.590299   79367 system_pods.go:89] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.590306   79367 system_pods.go:89] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.590312   79367 system_pods.go:89] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.590322   79367 system_pods.go:89] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.590335   79367 system_pods.go:89] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.590351   79367 system_pods.go:126] duration metric: took 204.620982ms to wait for k8s-apps to be running ...
	I0814 17:43:10.590364   79367 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:43:10.590418   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:10.605594   79367 system_svc.go:56] duration metric: took 15.223089ms WaitForService to wait for kubelet
	I0814 17:43:10.605624   79367 kubeadm.go:582] duration metric: took 10.194267533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:43:10.605644   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:43:10.786127   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:43:10.786160   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:43:10.786173   79367 node_conditions.go:105] duration metric: took 180.522994ms to run NodePressure ...
	I0814 17:43:10.786187   79367 start.go:241] waiting for startup goroutines ...
	I0814 17:43:10.786197   79367 start.go:246] waiting for cluster config update ...
	I0814 17:43:10.786210   79367 start.go:255] writing updated cluster config ...
	I0814 17:43:10.786498   79367 ssh_runner.go:195] Run: rm -f paused
	I0814 17:43:10.834139   79367 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:43:10.836315   79367 out.go:177] * Done! kubectl is now configured to use "no-preload-545149" cluster and "default" namespace by default
	I0814 17:43:43.267316   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:43.267596   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:43.267623   80228 kubeadm.go:310] 
	I0814 17:43:43.267680   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:43:43.267757   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:43:43.267778   80228 kubeadm.go:310] 
	I0814 17:43:43.267839   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:43:43.267894   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:43:43.268029   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:43:43.268044   80228 kubeadm.go:310] 
	I0814 17:43:43.268190   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:43:43.268247   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:43:43.268296   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:43:43.268305   80228 kubeadm.go:310] 
	I0814 17:43:43.268446   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:43:43.268561   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:43:43.268572   80228 kubeadm.go:310] 
	I0814 17:43:43.268741   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:43:43.268907   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:43:43.269021   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:43:43.269120   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:43:43.269131   80228 kubeadm.go:310] 
	I0814 17:43:43.269560   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:43:43.269642   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:43:43.269698   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 17:43:43.269809   80228 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 17:43:43.269853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:43:43.733975   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:43.748632   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:43:43.758474   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:43:43.758493   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:43:43.758543   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:43:43.767721   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:43:43.767777   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:43:43.777259   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:43:43.786562   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:43:43.786623   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:43:43.795253   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.803677   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:43:43.803733   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.812416   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:43:43.821020   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:43:43.821075   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:43:43.829709   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:43:44.024836   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:45:40.060126   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:45:40.060266   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:45:40.061931   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:45:40.062003   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:45:40.062110   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:45:40.062231   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:45:40.062360   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:45:40.062459   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:45:40.063940   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:45:40.064041   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:45:40.064124   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:45:40.064230   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:45:40.064305   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:45:40.064398   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:45:40.064471   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:45:40.064550   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:45:40.064632   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:45:40.064712   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:45:40.064798   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:45:40.064857   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:45:40.064913   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:45:40.064956   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:45:40.065040   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:45:40.065146   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:45:40.065222   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:45:40.065366   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:45:40.065490   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:45:40.065547   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:45:40.065648   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:45:40.068108   80228 out.go:204]   - Booting up control plane ...
	I0814 17:45:40.068211   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:45:40.068294   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:45:40.068395   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:45:40.068522   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:45:40.068675   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:45:40.068751   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:45:40.068843   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069048   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069141   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069393   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069510   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069756   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069823   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069982   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070051   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.070204   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070211   80228 kubeadm.go:310] 
	I0814 17:45:40.070244   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:45:40.070291   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:45:40.070299   80228 kubeadm.go:310] 
	I0814 17:45:40.070337   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:45:40.070379   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:45:40.070504   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:45:40.070521   80228 kubeadm.go:310] 
	I0814 17:45:40.070673   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:45:40.070723   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:45:40.070764   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:45:40.070776   80228 kubeadm.go:310] 
	I0814 17:45:40.070876   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:45:40.070945   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:45:40.070953   80228 kubeadm.go:310] 
	I0814 17:45:40.071045   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:45:40.071151   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:45:40.071246   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:45:40.071363   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:45:40.071453   80228 kubeadm.go:310] 
	I0814 17:45:40.071481   80228 kubeadm.go:394] duration metric: took 8m2.599023024s to StartCluster
	I0814 17:45:40.071554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:45:40.071617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:45:40.115691   80228 cri.go:89] found id: ""
	I0814 17:45:40.115719   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.115727   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:45:40.115734   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:45:40.115798   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:45:40.155537   80228 cri.go:89] found id: ""
	I0814 17:45:40.155566   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.155574   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:45:40.155580   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:45:40.155645   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:45:40.189570   80228 cri.go:89] found id: ""
	I0814 17:45:40.189604   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.189616   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:45:40.189625   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:45:40.189708   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:45:40.222496   80228 cri.go:89] found id: ""
	I0814 17:45:40.222521   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.222528   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:45:40.222533   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:45:40.222590   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:45:40.255095   80228 cri.go:89] found id: ""
	I0814 17:45:40.255129   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.255142   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:45:40.255151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:45:40.255233   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:45:40.290297   80228 cri.go:89] found id: ""
	I0814 17:45:40.290326   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.290341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:45:40.290348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:45:40.290396   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:45:40.326660   80228 cri.go:89] found id: ""
	I0814 17:45:40.326685   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.326695   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:45:40.326701   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:45:40.326764   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:45:40.359867   80228 cri.go:89] found id: ""
	I0814 17:45:40.359896   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.359908   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:45:40.359918   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:45:40.359933   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:45:40.397513   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:45:40.397543   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:45:40.451744   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:45:40.451778   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:45:40.475817   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:45:40.475843   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:45:40.575913   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:45:40.575933   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:45:40.575946   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 17:45:40.683455   80228 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:45:40.683509   80228 out.go:239] * 
	W0814 17:45:40.683587   80228 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.683623   80228 out.go:239] * 
	W0814 17:45:40.684431   80228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:45:40.688043   80228 out.go:177] 
	W0814 17:45:40.689238   80228 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.689291   80228 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:45:40.689318   80228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:45:40.690913   80228 out.go:177] 
	
	
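	A minimal sketch of acting on the suggestion logged above (the profile name is a placeholder, not a value taken from this run; only flags and commands already named in the log are used):
	
	# Retry the start with the kubelet cgroup driver pinned to systemd, as suggested above.
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	# If kubeadm still times out waiting for the control plane, inspect why the kubelet never answered /healthz.
	minikube -p <profile> ssh "sudo journalctl -xeu kubelet | tail -n 100"
	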
	==> CRI-O <==
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.214633315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19334c2c-1606-4550-baf7-00a910ec3f4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.214824391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657052262629168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b98873700,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90b87828591b4c4edd21b3d179b225801cfadef171565630f1a4c8f99d09d,PodSandboxId:4b58f8b06e1f749b5e6a27770f77d7563e20563ad0cc471b67bf9a23a0f1a664,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723657032167672774,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 876cfcd4-be4c-422c-ad8f-ae89b22dd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03,PodSandboxId:ad3f0ae523e518364f6f622e4d020df4dfd1cea426663069205035ee58b36e59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657029063972969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kccp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db961449-4326-4700-a3e0-c11ab96df3ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052,PodSandboxId:44e239110b45273bc0be17f5aaf2671e4a5e326a971b2c9a8bb51af18f63fd8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657021522233967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8x9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ae0e0-8205-4854-8
2ba-0119b81efe2a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723657021434577976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b988737
00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c,PodSandboxId:052932072aaab2c6ff9bf917cf2a22c41d19c556251b965dbda2e082f75f2b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657016670697236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7d3f0a71a520824ed292b415206ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5,PodSandboxId:a7ac6ee82c686b17e2ce738219d93a766ecc163ca9b2f4544661248fe6dd90ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657016685526970,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e316ea113121d01cd33357150ae58e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535,PodSandboxId:1aeed98a248b5f70f1569fe266a3e9ce237d924d14b03dad43555518bf176277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657016697439814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c60fab48b6bac6cf28be63793c0d8b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0,PodSandboxId:b00f8d6289491d6c22fdd416eacc08a9c61849e5a8f4cb98842428721eb3ee84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657016687583333,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b45f6e13fda13d3dc38c3cda0c2b93c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19334c2c-1606-4550-baf7-00a910ec3f4a name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.249324038Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fecd6b7a-0451-4e5f-97d6-7dc444b9fa09 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.249436604Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fecd6b7a-0451-4e5f-97d6-7dc444b9fa09 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.250318380Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a5d16d7-7da1-429a-a2c8-89cb8173bf96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.250790849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657830250765272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a5d16d7-7da1-429a-a2c8-89cb8173bf96 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.251535849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=458f4007-8b52-4b88-89f7-ccaf6fb73abc name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.251642649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=458f4007-8b52-4b88-89f7-ccaf6fb73abc name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.255668399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657052262629168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b98873700,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90b87828591b4c4edd21b3d179b225801cfadef171565630f1a4c8f99d09d,PodSandboxId:4b58f8b06e1f749b5e6a27770f77d7563e20563ad0cc471b67bf9a23a0f1a664,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723657032167672774,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 876cfcd4-be4c-422c-ad8f-ae89b22dd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03,PodSandboxId:ad3f0ae523e518364f6f622e4d020df4dfd1cea426663069205035ee58b36e59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657029063972969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kccp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db961449-4326-4700-a3e0-c11ab96df3ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052,PodSandboxId:44e239110b45273bc0be17f5aaf2671e4a5e326a971b2c9a8bb51af18f63fd8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657021522233967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8x9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ae0e0-8205-4854-8
2ba-0119b81efe2a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723657021434577976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b988737
00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c,PodSandboxId:052932072aaab2c6ff9bf917cf2a22c41d19c556251b965dbda2e082f75f2b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657016670697236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7d3f0a71a520824ed292b415206ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5,PodSandboxId:a7ac6ee82c686b17e2ce738219d93a766ecc163ca9b2f4544661248fe6dd90ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657016685526970,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e316ea113121d01cd33357150ae58e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535,PodSandboxId:1aeed98a248b5f70f1569fe266a3e9ce237d924d14b03dad43555518bf176277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657016697439814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c60fab48b6bac6cf28be63793c0d8b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0,PodSandboxId:b00f8d6289491d6c22fdd416eacc08a9c61849e5a8f4cb98842428721eb3ee84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657016687583333,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b45f6e13fda13d3dc38c3cda0c2b93c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=458f4007-8b52-4b88-89f7-ccaf6fb73abc name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.294361511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da1a174c-e487-4c60-9c40-4c9bdfa09561 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.294468617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da1a174c-e487-4c60-9c40-4c9bdfa09561 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.295644199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6330e77-3d35-41ed-8b9b-4edc60673ecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.296034060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657830296009373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6330e77-3d35-41ed-8b9b-4edc60673ecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.296507088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b144c7f6-817f-4932-bb06-7ee3af644f76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.296579880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b144c7f6-817f-4932-bb06-7ee3af644f76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.296764157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657052262629168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b98873700,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90b87828591b4c4edd21b3d179b225801cfadef171565630f1a4c8f99d09d,PodSandboxId:4b58f8b06e1f749b5e6a27770f77d7563e20563ad0cc471b67bf9a23a0f1a664,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723657032167672774,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 876cfcd4-be4c-422c-ad8f-ae89b22dd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03,PodSandboxId:ad3f0ae523e518364f6f622e4d020df4dfd1cea426663069205035ee58b36e59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657029063972969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kccp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db961449-4326-4700-a3e0-c11ab96df3ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052,PodSandboxId:44e239110b45273bc0be17f5aaf2671e4a5e326a971b2c9a8bb51af18f63fd8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657021522233967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8x9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ae0e0-8205-4854-8
2ba-0119b81efe2a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723657021434577976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b988737
00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c,PodSandboxId:052932072aaab2c6ff9bf917cf2a22c41d19c556251b965dbda2e082f75f2b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657016670697236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7d3f0a71a520824ed292b415206ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5,PodSandboxId:a7ac6ee82c686b17e2ce738219d93a766ecc163ca9b2f4544661248fe6dd90ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657016685526970,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e316ea113121d01cd33357150ae58e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535,PodSandboxId:1aeed98a248b5f70f1569fe266a3e9ce237d924d14b03dad43555518bf176277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657016697439814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c60fab48b6bac6cf28be63793c0d8b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0,PodSandboxId:b00f8d6289491d6c22fdd416eacc08a9c61849e5a8f4cb98842428721eb3ee84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657016687583333,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b45f6e13fda13d3dc38c3cda0c2b93c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b144c7f6-817f-4932-bb06-7ee3af644f76 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.306600947Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=eb7e4111-e174-4512-ba8e-0e3e8e0a3d5a name=/runtime.v1.RuntimeService/Status
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.306674728Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=eb7e4111-e174-4512-ba8e-0e3e8e0a3d5a name=/runtime.v1.RuntimeService/Status
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.328954595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2af8422-360a-4039-b46d-c47e2e8c6b95 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.329020185Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2af8422-360a-4039-b46d-c47e2e8c6b95 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.330265662Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f2740f8-e812-49eb-a34f-8f3e884f364c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.330878749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657830330853981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f2740f8-e812-49eb-a34f-8f3e884f364c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.331489830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6695829e-cbe9-4ef8-b90c-06a0901717db name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.331624574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6695829e-cbe9-4ef8-b90c-06a0901717db name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:50:30 embed-certs-309673 crio[729]: time="2024-08-14 17:50:30.331889613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657052262629168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b98873700,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90b87828591b4c4edd21b3d179b225801cfadef171565630f1a4c8f99d09d,PodSandboxId:4b58f8b06e1f749b5e6a27770f77d7563e20563ad0cc471b67bf9a23a0f1a664,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723657032167672774,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 876cfcd4-be4c-422c-ad8f-ae89b22dd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03,PodSandboxId:ad3f0ae523e518364f6f622e4d020df4dfd1cea426663069205035ee58b36e59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657029063972969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kccp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db961449-4326-4700-a3e0-c11ab96df3ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052,PodSandboxId:44e239110b45273bc0be17f5aaf2671e4a5e326a971b2c9a8bb51af18f63fd8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657021522233967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8x9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ae0e0-8205-4854-8
2ba-0119b81efe2a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723657021434577976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b988737
00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c,PodSandboxId:052932072aaab2c6ff9bf917cf2a22c41d19c556251b965dbda2e082f75f2b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657016670697236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7d3f0a71a520824ed292b415206ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5,PodSandboxId:a7ac6ee82c686b17e2ce738219d93a766ecc163ca9b2f4544661248fe6dd90ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657016685526970,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e316ea113121d01cd33357150ae58e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535,PodSandboxId:1aeed98a248b5f70f1569fe266a3e9ce237d924d14b03dad43555518bf176277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657016697439814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c60fab48b6bac6cf28be63793c0d8b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0,PodSandboxId:b00f8d6289491d6c22fdd416eacc08a9c61849e5a8f4cb98842428721eb3ee84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657016687583333,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b45f6e13fda13d3dc38c3cda0c2b93c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6695829e-cbe9-4ef8-b90c-06a0901717db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1c13e2694057       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   27c056bb63e0e       storage-provisioner
	01c90b8782859       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   4b58f8b06e1f7       busybox
	0ac264c97809e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   ad3f0ae523e51       coredns-6f6b679f8f-kccp8
	4b094a20accac       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   44e239110b452       kube-proxy-z8x9t
	bdac981ff1f5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   27c056bb63e0e       storage-provisioner
	038cd12336322       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   1aeed98a248b5       kube-controller-manager-embed-certs-309673
	221f94a9fa6af       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   b00f8d6289491       kube-apiserver-embed-certs-309673
	e2594588a11a2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   a7ac6ee82c686       kube-scheduler-embed-certs-309673
	4b3a19329bb34       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   052932072aaab       etcd-embed-certs-309673
	
	
	==> coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40830 - 21315 "HINFO IN 5442161632545793277.7934525811174230808. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017461639s
	
	
	==> describe nodes <==
	Name:               embed-certs-309673
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-309673
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=embed-certs-309673
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T17_29_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:29:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-309673
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:50:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:47:42 +0000   Wed, 14 Aug 2024 17:29:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:47:42 +0000   Wed, 14 Aug 2024 17:29:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:47:42 +0000   Wed, 14 Aug 2024 17:29:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:47:42 +0000   Wed, 14 Aug 2024 17:37:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.2
	  Hostname:    embed-certs-309673
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6300c9e9736b454195de57a9af7b141a
	  System UUID:                6300c9e9-736b-4541-95de-57a9af7b141a
	  Boot ID:                    bc806884-d868-4a06-95a7-574ce4bb3d49
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-kccp8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-309673                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-309673             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-309673    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-z8x9t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-309673             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-jflvw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-309673 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-309673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-309673 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-309673 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-309673 event: Registered Node embed-certs-309673 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-309673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-309673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-309673 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-309673 event: Registered Node embed-certs-309673 in Controller
	
	
	==> dmesg <==
	[Aug14 17:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050667] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037725] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.708486] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.832735] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.337162] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.871977] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.064439] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049353] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.194362] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.131576] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.292316] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.036513] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +1.657314] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.062766] kauditd_printk_skb: 158 callbacks suppressed
	[Aug14 17:37] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.390624] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +3.328081] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.145848] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] <==
	{"level":"info","ts":"2024-08-14T17:36:59.062150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6af8d2579d5118d received MsgVoteResp from b6af8d2579d5118d at term 3"}
	{"level":"info","ts":"2024-08-14T17:36:59.062163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b6af8d2579d5118d became leader at term 3"}
	{"level":"info","ts":"2024-08-14T17:36:59.062174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b6af8d2579d5118d elected leader b6af8d2579d5118d at term 3"}
	{"level":"info","ts":"2024-08-14T17:36:59.064984Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b6af8d2579d5118d","local-member-attributes":"{Name:embed-certs-309673 ClientURLs:[https://192.168.61.2:2379]}","request-path":"/0/members/b6af8d2579d5118d/attributes","cluster-id":"4e4596f5647a61ec","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T17:36:59.065161Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:36:59.065234Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T17:36:59.065255Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T17:36:59.065272Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:36:59.066691Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:36:59.068029Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:36:59.068043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T17:36:59.069138Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.2:2379"}
	{"level":"warn","ts":"2024-08-14T17:37:17.176081Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.80244ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1264826851580686260 > lease_revoke:<id:118d9151f6d7ae25>","response":"size:28"}
	{"level":"info","ts":"2024-08-14T17:37:17.176178Z","caller":"traceutil/trace.go:171","msg":"trace[1594452547] linearizableReadLoop","detail":"{readStateIndex:617; appliedIndex:616; }","duration":"367.742359ms","start":"2024-08-14T17:37:16.808424Z","end":"2024-08-14T17:37:17.176166Z","steps":["trace[1594452547] 'read index received'  (duration: 142.716559ms)","trace[1594452547] 'applied index is now lower than readState.Index'  (duration: 225.0249ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:37:17.176295Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"367.848597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-309673\" ","response":"range_response_count:1 size:5478"}
	{"level":"info","ts":"2024-08-14T17:37:17.176310Z","caller":"traceutil/trace.go:171","msg":"trace[1339079020] range","detail":"{range_begin:/registry/minions/embed-certs-309673; range_end:; response_count:1; response_revision:581; }","duration":"367.884768ms","start":"2024-08-14T17:37:16.808420Z","end":"2024-08-14T17:37:17.176305Z","steps":["trace[1339079020] 'agreement among raft nodes before linearized reading'  (duration: 367.778713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:37:17.176330Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T17:37:16.808354Z","time spent":"367.971599ms","remote":"127.0.0.1:48778","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5501,"request content":"key:\"/registry/minions/embed-certs-309673\" "}
	{"level":"warn","ts":"2024-08-14T17:37:37.268589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.78448ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1264826851580686436 > lease_revoke:<id:118d9151f6d7afe0>","response":"size:28"}
	{"level":"info","ts":"2024-08-14T17:37:37.268692Z","caller":"traceutil/trace.go:171","msg":"trace[1828047830] linearizableReadLoop","detail":"{readStateIndex:640; appliedIndex:639; }","duration":"336.087182ms","start":"2024-08-14T17:37:36.932593Z","end":"2024-08-14T17:37:37.268680Z","steps":["trace[1828047830] 'read index received'  (duration: 108.116144ms)","trace[1828047830] 'applied index is now lower than readState.Index'  (duration: 227.969364ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:37:37.268854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.249289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-jflvw\" ","response":"range_response_count:1 size:4382"}
	{"level":"info","ts":"2024-08-14T17:37:37.268874Z","caller":"traceutil/trace.go:171","msg":"trace[359000852] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-jflvw; range_end:; response_count:1; response_revision:600; }","duration":"336.278289ms","start":"2024-08-14T17:37:36.932589Z","end":"2024-08-14T17:37:37.268867Z","steps":["trace[359000852] 'agreement among raft nodes before linearized reading'  (duration: 336.171215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:37:37.268899Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T17:37:36.932557Z","time spent":"336.336755ms","remote":"127.0.0.1:48788","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4405,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-jflvw\" "}
	{"level":"info","ts":"2024-08-14T17:46:59.107108Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2024-08-14T17:46:59.116969Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":825,"took":"9.322581ms","hash":638957722,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2637824,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-08-14T17:46:59.117097Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":638957722,"revision":825,"compact-revision":-1}
	
	
	==> kernel <==
	 17:50:30 up 13 min,  0 users,  load average: 0.05, 0.08, 0.08
	Linux embed-certs-309673 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] <==
	E0814 17:47:01.438659       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0814 17:47:01.438735       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:47:01.439820       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:47:01.439880       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:48:01.440756       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:48:01.440821       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 17:48:01.441034       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:48:01.441149       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:48:01.442080       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:48:01.443273       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:50:01.442490       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:50:01.442592       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 17:50:01.443669       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:50:01.443797       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:50:01.443907       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:50:01.445086       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] <==
	E0814 17:45:04.034589       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:45:04.472529       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:45:34.040752       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:45:34.481099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:46:04.047115       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:46:04.489253       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:46:34.052903       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:46:34.496879       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:47:04.059354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:47:04.504879       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:47:34.065988       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:47:34.513288       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:47:42.935334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-309673"
	I0814 17:48:04.046630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="868.83µs"
	E0814 17:48:04.072428       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:48:04.520778       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:48:15.044730       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="78.74µs"
	E0814 17:48:34.078836       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:48:34.527584       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:49:04.086646       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:49:04.535079       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:49:34.092726       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:49:34.542192       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:50:04.099242       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:50:04.551533       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:37:01.713984       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:37:01.727272       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.2"]
	E0814 17:37:01.727345       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:37:01.758802       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:37:01.758843       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:37:01.758873       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:37:01.761118       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:37:01.761400       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:37:01.761453       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:37:01.763287       1 config.go:197] "Starting service config controller"
	I0814 17:37:01.763322       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:37:01.763350       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:37:01.763450       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:37:01.764318       1 config.go:326] "Starting node config controller"
	I0814 17:37:01.764338       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:37:01.863610       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 17:37:01.863635       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:37:01.865062       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] <==
	I0814 17:36:57.815171       1 serving.go:386] Generated self-signed cert in-memory
	W0814 17:37:00.349245       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 17:37:00.350435       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 17:37:00.350504       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 17:37:00.350530       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 17:37:00.429795       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 17:37:00.431416       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:37:00.442433       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 17:37:00.444510       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 17:37:00.445628       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 17:37:00.444531       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 17:37:00.546804       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 17:49:17 embed-certs-309673 kubelet[936]: E0814 17:49:17.031099     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:49:25 embed-certs-309673 kubelet[936]: E0814 17:49:25.222444     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657765222099800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:49:25 embed-certs-309673 kubelet[936]: E0814 17:49:25.222725     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657765222099800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:49:30 embed-certs-309673 kubelet[936]: E0814 17:49:30.031316     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:49:35 embed-certs-309673 kubelet[936]: E0814 17:49:35.223893     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657775223546386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:49:35 embed-certs-309673 kubelet[936]: E0814 17:49:35.224336     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657775223546386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:49:43 embed-certs-309673 kubelet[936]: E0814 17:49:43.030832     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:49:45 embed-certs-309673 kubelet[936]: E0814 17:49:45.226727     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657785226146600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:49:45 embed-certs-309673 kubelet[936]: E0814 17:49:45.226805     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657785226146600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:49:55 embed-certs-309673 kubelet[936]: E0814 17:49:55.057350     936 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:49:55 embed-certs-309673 kubelet[936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:49:55 embed-certs-309673 kubelet[936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:49:55 embed-certs-309673 kubelet[936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:49:55 embed-certs-309673 kubelet[936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:49:55 embed-certs-309673 kubelet[936]: E0814 17:49:55.228712     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657795228241752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:49:55 embed-certs-309673 kubelet[936]: E0814 17:49:55.228825     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657795228241752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:49:57 embed-certs-309673 kubelet[936]: E0814 17:49:57.031057     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:50:05 embed-certs-309673 kubelet[936]: E0814 17:50:05.231002     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657805230334529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:05 embed-certs-309673 kubelet[936]: E0814 17:50:05.231255     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657805230334529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:11 embed-certs-309673 kubelet[936]: E0814 17:50:11.030790     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:50:15 embed-certs-309673 kubelet[936]: E0814 17:50:15.232636     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657815232325276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:15 embed-certs-309673 kubelet[936]: E0814 17:50:15.232687     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657815232325276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:25 embed-certs-309673 kubelet[936]: E0814 17:50:25.032809     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:50:25 embed-certs-309673 kubelet[936]: E0814 17:50:25.234300     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657825233797157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:25 embed-certs-309673 kubelet[936]: E0814 17:50:25.234570     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657825233797157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] <==
	I0814 17:37:32.380741       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 17:37:32.396157       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 17:37:32.396651       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 17:37:49.795649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 17:37:49.795842       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-309673_4af1e128-7cf2-4ab5-972d-f997e49c2728!
	I0814 17:37:49.800762       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"02efbc80-f5f3-44a2-acf2-74495f212cba", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-309673_4af1e128-7cf2-4ab5-972d-f997e49c2728 became leader
	I0814 17:37:49.896979       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-309673_4af1e128-7cf2-4ab5-972d-f997e49c2728!
	
	
	==> storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] <==
	I0814 17:37:01.581608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0814 17:37:31.585836       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-309673 -n embed-certs-309673
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-309673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-jflvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-309673 describe pod metrics-server-6867b74b74-jflvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-309673 describe pod metrics-server-6867b74b74-jflvw: exit status 1 (65.227662ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-jflvw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-309673 describe pod metrics-server-6867b74b74-jflvw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.00s)
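For reference, the non-running-pod query that the harness ran above can be repeated by hand. This is a minimal sketch, assuming the embed-certs-309673 profile and its kubeconfig context still exist:

	kubectl --context embed-certs-309673 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

Any name it prints (here metrics-server-6867b74b74-jflvw) can then be inspected with kubectl describe pod -n kube-system <name>; the pod sits in kube-system, which would explain why the describe without a namespace above returned NotFound.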

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0814 17:43:02.589050   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-14 17:51:23.684142113 +0000 UTC m=+6117.369424900
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
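A quick manual equivalent of this wait, assuming the default-k8s-diff-port-885666 kubeconfig context is still reachable, is:

	kubectl --context default-k8s-diff-port-885666 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard

An empty listing here is consistent with the failure: no pod carrying the k8s-app=kubernetes-dashboard label appeared within the 9m0s window.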
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-885666 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-885666 logs -n 25: (2.089508814s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-984053 sudo cat                              | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo find                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo crio                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-984053                                       | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-005029 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | disable-driver-mounts-005029                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:30 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-545149             | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309673            | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:33:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:33:46.321266   80228 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:33:46.321519   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321529   80228 out.go:304] Setting ErrFile to fd 2...
	I0814 17:33:46.321533   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321691   80228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:33:46.322185   80228 out.go:298] Setting JSON to false
	I0814 17:33:46.323102   80228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8170,"bootTime":1723648656,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:33:46.323161   80228 start.go:139] virtualization: kvm guest
	I0814 17:33:46.325361   80228 out.go:177] * [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:33:46.326668   80228 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:33:46.326679   80228 notify.go:220] Checking for updates...
	I0814 17:33:46.329217   80228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:33:46.330813   80228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:33:46.332019   80228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:33:46.333264   80228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:33:46.334480   80228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:33:46.336108   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:33:46.336521   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.336564   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.351154   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0814 17:33:46.351563   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.352042   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.352061   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.352395   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.352567   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.354248   80228 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 17:33:46.355547   80228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:33:46.355834   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.355865   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.370976   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0814 17:33:46.371452   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.371977   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.372008   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.372376   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.372624   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.407797   80228 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:33:46.408905   80228 start.go:297] selected driver: kvm2
	I0814 17:33:46.408918   80228 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.409022   80228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:33:46.409677   80228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.409753   80228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:33:46.424801   80228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:33:46.425288   80228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:33:46.425338   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:33:46.425349   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:33:46.425396   80228 start.go:340] cluster config:
	{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.425518   80228 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.427224   80228 out.go:177] * Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	I0814 17:33:46.428485   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:33:46.428516   80228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:33:46.428523   80228 cache.go:56] Caching tarball of preloaded images
	I0814 17:33:46.428589   80228 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:33:46.428600   80228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:33:46.428727   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:33:46.428899   80228 start.go:360] acquireMachinesLock for old-k8s-version-505584: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
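The preload lines above record a simple cache-or-download decision: the test host keeps a versioned image tarball under its .minikube cache and skips the download when the file is already present. Below is a minimal Go sketch of that check only; the preloadPath helper and its naming scheme are assumptions that merely mirror the path seen in the log, not minikube's own code.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the expected location of a preloaded image tarball for a
// given Kubernetes version and container runtime. The naming scheme only
// mirrors the path in the log above; it is not taken from the minikube source.
func preloadPath(cacheDir, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.20.0", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
		return
	}
	fmt.Println("preload not cached, would download:", p)
}
```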
	I0814 17:33:47.579625   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:50.651557   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:56.731587   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:59.803787   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:05.883582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:08.959564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:15.035593   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:18.107634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:24.187624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:27.259634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:33.339631   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:36.411675   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:42.491633   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:45.563609   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:51.643582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:54.715620   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:00.795564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:03.867637   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:09.947634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:13.019646   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:19.099578   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:22.171640   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:28.251634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:31.323645   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:37.403627   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:40.475635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:46.555591   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:49.627635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:55.707632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:58.779532   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:04.859619   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:07.931632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:14.011612   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:17.083624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:23.163638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:26.235638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
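The long run of "no route to host" lines above is the driver repeatedly dialing the guest's SSH port while the VM is still unreachable. A sketch of that probe loop, assuming illustrative timeouts and reusing the address from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSHPort keeps dialing addr until the TCP connection succeeds or the
// deadline passes, logging each failure much like the lines above.
func waitForSSHPort(addr string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port reachable, the SSH handshake can proceed
		}
		fmt.Println("Error dialing TCP:", err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh port %s not reachable within %s", addr, maxWait)
}

func main() {
	if err := waitForSSHPort("192.168.39.162:22", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```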
	I0814 17:36:29.240279   79521 start.go:364] duration metric: took 4m23.88398072s to acquireMachinesLock for "embed-certs-309673"
	I0814 17:36:29.240341   79521 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:29.240351   79521 fix.go:54] fixHost starting: 
	I0814 17:36:29.240703   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:29.240730   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:29.255901   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0814 17:36:29.256372   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:29.256816   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:36:29.256839   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:29.257153   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:29.257337   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:29.257518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:36:29.259382   79521 fix.go:112] recreateIfNeeded on embed-certs-309673: state=Stopped err=<nil>
	I0814 17:36:29.259419   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	W0814 17:36:29.259583   79521 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:29.261931   79521 out.go:177] * Restarting existing kvm2 VM for "embed-certs-309673" ...
	I0814 17:36:29.263301   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Start
	I0814 17:36:29.263487   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring networks are active...
	I0814 17:36:29.264251   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network default is active
	I0814 17:36:29.264797   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network mk-embed-certs-309673 is active
	I0814 17:36:29.265331   79521 main.go:141] libmachine: (embed-certs-309673) Getting domain xml...
	I0814 17:36:29.266055   79521 main.go:141] libmachine: (embed-certs-309673) Creating domain...
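The restart sequence above (ensure the libvirt networks are active, then start the stopped domain) can be approximated from the shell with virsh. The snippet below does that through os/exec purely as an illustration of the steps; the kvm2 driver itself talks to libvirt directly, and only the network and domain names are copied from the log.

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes its combined output, mirroring the
// driver's debug logging.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s %v: %s\n", name, args, out)
	return err
}

func main() {
	// net-start may fail if a network is already active; that is fine here.
	_ = run("virsh", "net-start", "default")
	_ = run("virsh", "net-start", "mk-embed-certs-309673")
	if err := run("virsh", "start", "embed-certs-309673"); err != nil {
		fmt.Println("failed to start domain:", err)
	}
}
```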
	I0814 17:36:29.237663   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:29.237704   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238088   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:36:29.238131   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238337   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:36:29.240159   79367 machine.go:97] duration metric: took 4m37.421920583s to provisionDockerMachine
	I0814 17:36:29.240195   79367 fix.go:56] duration metric: took 4m37.443181113s for fixHost
	I0814 17:36:29.240202   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 4m37.443414836s
	W0814 17:36:29.240223   79367 start.go:714] error starting host: provision: host is not running
	W0814 17:36:29.240348   79367 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 17:36:29.240358   79367 start.go:729] Will try again in 5 seconds ...
	I0814 17:36:30.482377   79521 main.go:141] libmachine: (embed-certs-309673) Waiting to get IP...
	I0814 17:36:30.483405   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.483750   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.483837   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.483729   80776 retry.go:31] will retry after 224.900105ms: waiting for machine to come up
	I0814 17:36:30.710259   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.710718   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.710748   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.710679   80776 retry.go:31] will retry after 322.892012ms: waiting for machine to come up
	I0814 17:36:31.035358   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.035807   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.035835   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.035757   80776 retry.go:31] will retry after 374.226901ms: waiting for machine to come up
	I0814 17:36:31.411228   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.411783   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.411813   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.411717   80776 retry.go:31] will retry after 472.149905ms: waiting for machine to come up
	I0814 17:36:31.885265   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.885787   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.885810   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.885757   80776 retry.go:31] will retry after 676.063343ms: waiting for machine to come up
	I0814 17:36:32.563206   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:32.563711   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:32.563745   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:32.563658   80776 retry.go:31] will retry after 904.634039ms: waiting for machine to come up
	I0814 17:36:33.469832   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:33.470255   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:33.470278   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:33.470206   80776 retry.go:31] will retry after 1.132974911s: waiting for machine to come up
	I0814 17:36:34.605040   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:34.605542   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:34.605576   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:34.605498   80776 retry.go:31] will retry after 1.210457498s: waiting for machine to come up
	I0814 17:36:34.242590   79367 start.go:360] acquireMachinesLock for no-preload-545149: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:36:35.817809   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:35.818152   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:35.818177   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:35.818111   80776 retry.go:31] will retry after 1.275236618s: waiting for machine to come up
	I0814 17:36:37.095551   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:37.095975   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:37.096001   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:37.095937   80776 retry.go:31] will retry after 1.716925001s: waiting for machine to come up
	I0814 17:36:38.814927   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:38.815916   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:38.815943   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:38.815864   80776 retry.go:31] will retry after 2.040428036s: waiting for machine to come up
	I0814 17:36:40.858640   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:40.859157   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:40.859188   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:40.859108   80776 retry.go:31] will retry after 2.259949864s: waiting for machine to come up
	I0814 17:36:43.120436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:43.120913   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:43.120939   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:43.120879   80776 retry.go:31] will retry after 3.64334808s: waiting for machine to come up
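The "will retry after ..." lines above show the wait-for-IP loop polling with an increasing, slightly jittered delay until the domain's DHCP lease appears. A minimal sketch of that pattern, where lookupIP is a hypothetical stand-in for reading libvirt's lease table and the growth factor is an assumption:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder; a real implementation would query the libvirt
// network's DHCP leases for the domain's MAC address.
func lookupIP(mac string) (string, error) { return "", errNoLease }

// waitForIP polls lookupIP with a growing, jittered delay until an address
// appears or maxWait elapses.
func waitForIP(mac string, maxWait time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay, roughly as the log intervals do
	}
	return "", fmt.Errorf("machine %s did not get an IP within %s", mac, maxWait)
}

func main() {
	if _, err := waitForIP("52:54:00:ed:61:4e", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
```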
	I0814 17:36:47.975977   79871 start.go:364] duration metric: took 3m52.18367446s to acquireMachinesLock for "default-k8s-diff-port-885666"
	I0814 17:36:47.976049   79871 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:47.976064   79871 fix.go:54] fixHost starting: 
	I0814 17:36:47.976457   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:47.976492   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:47.993513   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0814 17:36:47.993940   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:47.994480   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:36:47.994504   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:47.994815   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:47.995005   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:36:47.995181   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:36:47.996716   79871 fix.go:112] recreateIfNeeded on default-k8s-diff-port-885666: state=Stopped err=<nil>
	I0814 17:36:47.996755   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	W0814 17:36:47.996923   79871 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:47.998967   79871 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-885666" ...
	I0814 17:36:46.766908   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767458   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has current primary IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767500   79521 main.go:141] libmachine: (embed-certs-309673) Found IP for machine: 192.168.61.2
	I0814 17:36:46.767516   79521 main.go:141] libmachine: (embed-certs-309673) Reserving static IP address...
	I0814 17:36:46.767974   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.767993   79521 main.go:141] libmachine: (embed-certs-309673) Reserved static IP address: 192.168.61.2
	I0814 17:36:46.768006   79521 main.go:141] libmachine: (embed-certs-309673) DBG | skip adding static IP to network mk-embed-certs-309673 - found existing host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"}
	I0814 17:36:46.768017   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Getting to WaitForSSH function...
	I0814 17:36:46.768023   79521 main.go:141] libmachine: (embed-certs-309673) Waiting for SSH to be available...
	I0814 17:36:46.770187   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770517   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.770548   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770612   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH client type: external
	I0814 17:36:46.770643   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa (-rw-------)
	I0814 17:36:46.770672   79521 main.go:141] libmachine: (embed-certs-309673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:36:46.770697   79521 main.go:141] libmachine: (embed-certs-309673) DBG | About to run SSH command:
	I0814 17:36:46.770703   79521 main.go:141] libmachine: (embed-certs-309673) DBG | exit 0
	I0814 17:36:46.895078   79521 main.go:141] libmachine: (embed-certs-309673) DBG | SSH cmd err, output: <nil>: 
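The "Using SSH client type: external" block above shows how readiness is confirmed: the system ssh binary is invoked with non-interactive options and asked to run `exit 0`. A sketch of the same probe via os/exec; the options and key path mirror the log, but treat this as an illustration rather than the driver's actual implementation.

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `exit 0` on the guest through the system ssh client and
// reports whether the command succeeded.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	err := exec.Command("/usr/bin/ssh", args...).Run()
	if err != nil {
		fmt.Println("ssh not ready yet:", err)
	}
	return err == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa"
	fmt.Println("ssh ready:", sshReady("192.168.61.2", key))
}
```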
	I0814 17:36:46.895444   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetConfigRaw
	I0814 17:36:46.896033   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:46.898715   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899085   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.899117   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899434   79521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/config.json ...
	I0814 17:36:46.899701   79521 machine.go:94] provisionDockerMachine start ...
	I0814 17:36:46.899723   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:46.899906   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:46.901985   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902244   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.902268   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902398   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:46.902564   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902707   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902829   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:46.902966   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:46.903201   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:46.903213   79521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:36:47.007289   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:36:47.007313   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007589   79521 buildroot.go:166] provisioning hostname "embed-certs-309673"
	I0814 17:36:47.007608   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007802   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.010311   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010631   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.010670   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010805   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.010956   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011067   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.011269   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.011455   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.011467   79521 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-309673 && echo "embed-certs-309673" | sudo tee /etc/hostname
	I0814 17:36:47.128575   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-309673
	
	I0814 17:36:47.128601   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.131125   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131464   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.131493   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131655   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.131970   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132146   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132286   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.132457   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.132614   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.132630   79521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-309673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-309673/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-309673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:36:47.247426   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:47.247469   79521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:36:47.247486   79521 buildroot.go:174] setting up certificates
	I0814 17:36:47.247496   79521 provision.go:84] configureAuth start
	I0814 17:36:47.247506   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.247768   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.250616   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.250993   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.251018   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.251148   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.253149   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.253465   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253551   79521 provision.go:143] copyHostCerts
	I0814 17:36:47.253616   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:36:47.253628   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:36:47.253703   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:36:47.253817   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:36:47.253835   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:36:47.253875   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:36:47.253952   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:36:47.253962   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:36:47.253994   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:36:47.254060   79521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.embed-certs-309673 san=[127.0.0.1 192.168.61.2 embed-certs-309673 localhost minikube]
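The configureAuth lines above replace the cached host certificates and then generate a server certificate whose SANs cover the loopback address, the machine IP, the hostname, localhost and minikube. A small crypto/x509 sketch of issuing such a certificate; it is self-signed for brevity, whereas the real provisioning flow signs with the CA material under .minikube/certs.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a server certificate template with the SANs seen in
	// the log. Self-signed here; provisioning signs with ca.pem/ca-key.pem.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-309673"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.2")},
		DNSNames:     []string{"embed-certs-309673", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```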
	I0814 17:36:47.338831   79521 provision.go:177] copyRemoteCerts
	I0814 17:36:47.338892   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:36:47.338921   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.341582   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.341897   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.341915   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.342053   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.342237   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.342374   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.342497   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.424777   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:36:47.446682   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:36:47.467672   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:36:47.488423   79521 provision.go:87] duration metric: took 240.914172ms to configureAuth
	I0814 17:36:47.488453   79521 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:36:47.488645   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:36:47.488733   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.491453   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.491793   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.491816   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.492028   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.492216   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492351   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492479   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.492716   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.492909   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.492931   79521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:36:47.746210   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:36:47.746248   79521 machine.go:97] duration metric: took 846.530779ms to provisionDockerMachine
	I0814 17:36:47.746260   79521 start.go:293] postStartSetup for "embed-certs-309673" (driver="kvm2")
	I0814 17:36:47.746274   79521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:36:47.746297   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.746659   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:36:47.746694   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.749342   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749674   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.749702   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749831   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.750004   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.750126   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.750272   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.833279   79521 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:36:47.837076   79521 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:36:47.837099   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:36:47.837183   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:36:47.837269   79521 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:36:47.837387   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:36:47.845640   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:47.866978   79521 start.go:296] duration metric: took 120.70557ms for postStartSetup
	I0814 17:36:47.867012   79521 fix.go:56] duration metric: took 18.626661733s for fixHost
	I0814 17:36:47.867030   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.869687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870016   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.870046   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870220   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.870399   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870660   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870827   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.870999   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.871209   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.871221   79521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:36:47.975817   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657007.950271601
	
	I0814 17:36:47.975848   79521 fix.go:216] guest clock: 1723657007.950271601
	I0814 17:36:47.975860   79521 fix.go:229] Guest: 2024-08-14 17:36:47.950271601 +0000 UTC Remote: 2024-08-14 17:36:47.867016056 +0000 UTC m=+282.648397849 (delta=83.255545ms)
	I0814 17:36:47.975889   79521 fix.go:200] guest clock delta is within tolerance: 83.255545ms
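The fix.go lines above compare the guest VM clock against the host-side timestamp and accept the roughly 83ms drift because it falls under a tolerance. Reduced to its core, that check is an absolute time delta compared against a threshold; the sketch below is a hypothetical illustration of that idea (the function name and tolerance value are assumptions, not minikube's code):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // reference (host) clock; the tolerance is an assumed example value.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(83 * time.Millisecond) // drift similar to the logged 83.255545ms
    	fmt.Println(withinTolerance(guest, host, time.Second)) // true
    }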
	I0814 17:36:47.975896   79521 start.go:83] releasing machines lock for "embed-certs-309673", held for 18.735575335s
	I0814 17:36:47.975931   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.976213   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.978934   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979457   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.979483   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979625   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980134   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980303   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980382   79521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:36:47.980428   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.980574   79521 ssh_runner.go:195] Run: cat /version.json
	I0814 17:36:47.980603   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.983247   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983557   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983649   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.983687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983828   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984032   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984042   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.984063   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.984183   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984232   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984320   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984412   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.984467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984608   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:48.064891   79521 ssh_runner.go:195] Run: systemctl --version
	I0814 17:36:48.101403   79521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:36:48.239841   79521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:36:48.245634   79521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:36:48.245718   79521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:36:48.260517   79521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:36:48.260543   79521 start.go:495] detecting cgroup driver to use...
	I0814 17:36:48.260597   79521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:36:48.275003   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:36:48.290316   79521 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:36:48.290376   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:36:48.304351   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:36:48.320954   79521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:36:48.434176   79521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:36:48.582137   79521 docker.go:233] disabling docker service ...
	I0814 17:36:48.582217   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:36:48.595784   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:36:48.608379   79521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:36:48.735500   79521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:36:48.876194   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:36:48.891826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:36:48.910820   79521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:36:48.910887   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.921125   79521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:36:48.921198   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.931615   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.942779   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.953124   79521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:36:48.963454   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.974457   79521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.991583   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:49.006059   79521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:36:49.015586   79521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:36:49.015649   79521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:36:49.028742   79521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
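The preceding lines show a probe-then-fallback sequence for bridge netfilter: the sysctl probe fails because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is enabled before CRI-O is restarted. A hypothetical shell-out sketch of that same sequence (command strings taken from the log, error handling simplified):

    package main

    import (
    	"log"
    	"os/exec"
    )

    // run executes a command and logs its combined output on failure.
    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		log.Printf("%s %v failed: %v\n%s", name, args, err, out)
    	}
    	return err
    }

    func main() {
    	// Probe first; if the bridge netfilter sysctl is missing, load br_netfilter.
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		_ = run("sudo", "modprobe", "br_netfilter")
    	}
    	// Enable IPv4 forwarding, as in the logged command.
    	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }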
	I0814 17:36:49.038126   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:49.155387   79521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:36:49.318598   79521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:36:49.318679   79521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:36:49.323575   79521 start.go:563] Will wait 60s for crictl version
	I0814 17:36:49.323636   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:36:49.327233   79521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:36:49.369724   79521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:36:49.369814   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.399516   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.431594   79521 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:36:49.432940   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:49.435776   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436168   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:49.436199   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436447   79521 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 17:36:49.440606   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:49.453159   79521 kubeadm.go:883] updating cluster {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:36:49.453272   79521 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:36:49.453311   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:49.486635   79521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:36:49.486708   79521 ssh_runner.go:195] Run: which lz4
	I0814 17:36:49.490626   79521 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:36:49.494822   79521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:36:49.494852   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:36:48.000271   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Start
	I0814 17:36:48.000453   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring networks are active...
	I0814 17:36:48.001246   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network default is active
	I0814 17:36:48.001621   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network mk-default-k8s-diff-port-885666 is active
	I0814 17:36:48.002158   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Getting domain xml...
	I0814 17:36:48.002982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Creating domain...
	I0814 17:36:49.272729   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting to get IP...
	I0814 17:36:49.273726   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274182   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274273   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.274157   80921 retry.go:31] will retry after 208.258845ms: waiting for machine to come up
	I0814 17:36:49.483781   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484278   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.484211   80921 retry.go:31] will retry after 318.193974ms: waiting for machine to come up
	I0814 17:36:49.803815   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804311   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.804277   80921 retry.go:31] will retry after 426.023242ms: waiting for machine to come up
	I0814 17:36:50.232060   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232610   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232646   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.232519   80921 retry.go:31] will retry after 534.392065ms: waiting for machine to come up
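The retry.go:31 messages above wait for the new VM to obtain an IP address, sleeping a progressively longer interval between attempts. A minimal backoff sketch of that pattern (attempt count, base interval, and jitter are illustrative values, not minikube's):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry calls fn until it succeeds or attempts run out, sleeping a growing,
    // jittered interval between tries.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	wait := base
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
    		fmt.Printf("will retry after %v\n", wait+jitter)
    		time.Sleep(wait + jitter)
    		wait *= 2
    	}
    	return err
    }

    func main() {
    	i := 0
    	_ = retry(5, 200*time.Millisecond, func() error {
    		i++
    		if i < 3 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	})
    }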
	I0814 17:36:50.745416   79521 crio.go:462] duration metric: took 1.254815826s to copy over tarball
	I0814 17:36:50.745515   79521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:36:52.865848   79521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.120299454s)
	I0814 17:36:52.865879   79521 crio.go:469] duration metric: took 2.120437156s to extract the tarball
	I0814 17:36:52.865887   79521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:36:52.901808   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:52.946366   79521 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:36:52.946386   79521 cache_images.go:84] Images are preloaded, skipping loading
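The preload cycle above lists images with `crictl images --output json`, decides that the expected registry.k8s.io/kube-apiserver:v1.31.0 image is missing, copies and extracts the preload tarball over SSH, and re-lists until everything is present. The decision itself reduces to a set-membership check; a tiny illustrative version follows (image names are hardcoded for the example, not parsed from crictl):

    package main

    import "fmt"

    // allPreloaded reports whether every expected image tag is present in the
    // runtime's image list.
    func allPreloaded(have, want []string) bool {
    	set := make(map[string]bool, len(have))
    	for _, h := range have {
    		set[h] = true
    	}
    	for _, w := range want {
    		if !set[w] {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	have := []string{"registry.k8s.io/pause:3.10"}
    	want := []string{"registry.k8s.io/kube-apiserver:v1.31.0", "registry.k8s.io/pause:3.10"}
    	fmt.Println(allPreloaded(have, want)) // false -> fall back to the preload tarball
    }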
	I0814 17:36:52.946394   79521 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.31.0 crio true true} ...
	I0814 17:36:52.946492   79521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-309673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:36:52.946556   79521 ssh_runner.go:195] Run: crio config
	I0814 17:36:52.992520   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:36:52.992541   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:36:52.992553   79521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:36:52.992577   79521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-309673 NodeName:embed-certs-309673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:36:52.992740   79521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-309673"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:36:52.992811   79521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:36:53.002460   79521 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:36:53.002539   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:36:53.011167   79521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 17:36:53.026436   79521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:36:53.041728   79521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0814 17:36:53.059102   79521 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0814 17:36:53.062728   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:53.073803   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:53.200870   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:36:53.217448   79521 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673 for IP: 192.168.61.2
	I0814 17:36:53.217472   79521 certs.go:194] generating shared ca certs ...
	I0814 17:36:53.217495   79521 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:36:53.217694   79521 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:36:53.217755   79521 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:36:53.217766   79521 certs.go:256] generating profile certs ...
	I0814 17:36:53.217876   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/client.key
	I0814 17:36:53.217961   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key.83510bb8
	I0814 17:36:53.218034   79521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key
	I0814 17:36:53.218202   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:36:53.218248   79521 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:36:53.218272   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:36:53.218309   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:36:53.218343   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:36:53.218380   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:36:53.218447   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:53.219187   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:36:53.273437   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:36:53.307566   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:36:53.330107   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:36:53.360324   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 17:36:53.386974   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:36:53.409537   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:36:53.433873   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:36:53.456408   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:36:53.478233   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:36:53.500264   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:36:53.522440   79521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:36:53.538977   79521 ssh_runner.go:195] Run: openssl version
	I0814 17:36:53.544866   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:36:53.555085   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559340   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559399   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.565106   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:36:53.575561   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:36:53.585605   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589838   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589911   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.595165   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:36:53.604934   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:36:53.615153   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619362   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619435   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.624949   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:36:53.635459   79521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:36:53.639814   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:36:53.645419   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:36:53.651013   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:36:53.657004   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:36:53.662540   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:36:53.668187   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
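The run of `openssl x509 -noout -in ... -checkend 86400` commands above verifies that each control-plane certificate remains valid for at least another 24 hours. The same condition can be expressed with Go's crypto/x509; this is a sketch for illustration, not what minikube executes, and the certificate path is a placeholder:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate in pemPath expires within d,
    // i.e. the same condition that `openssl x509 -checkend` tests.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// Placeholder path; on the node the checked certs live under /var/lib/minikube/certs.
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }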
	I0814 17:36:53.673762   79521 kubeadm.go:392] StartCluster: {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:36:53.673867   79521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:36:53.673930   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.709404   79521 cri.go:89] found id: ""
	I0814 17:36:53.709490   79521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:36:53.719041   79521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:36:53.719068   79521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:36:53.719123   79521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:36:53.728077   79521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:36:53.729030   79521 kubeconfig.go:125] found "embed-certs-309673" server: "https://192.168.61.2:8443"
	I0814 17:36:53.730943   79521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:36:53.739841   79521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0814 17:36:53.739872   79521 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:36:53.739886   79521 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:36:53.739947   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.777400   79521 cri.go:89] found id: ""
	I0814 17:36:53.777476   79521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:36:53.792838   79521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:36:53.802189   79521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:36:53.802223   79521 kubeadm.go:157] found existing configuration files:
	
	I0814 17:36:53.802278   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:36:53.813778   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:36:53.813854   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:36:53.825962   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:36:53.834929   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:36:53.834987   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:36:53.846315   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.855138   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:36:53.855206   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.864109   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:36:53.872613   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:36:53.872672   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
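The preceding block walks admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, keeps a file only if it already points at https://control-plane.minikube.internal:8443, and removes it otherwise so the following `kubeadm init phase kubeconfig` run can regenerate it. A compact, hypothetical sketch of that loop (paths copied from the log, logic simplified):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			// Missing file or wrong endpoint: drop it so kubeadm can write a
    			// fresh copy, as the log shows happening next.
    			_ = os.Remove(f)
    			fmt.Println("removed", f)
    		}
    	}
    }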
	I0814 17:36:53.881307   79521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:36:53.890148   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.002103   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.664940   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.868608   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.932317   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:55.006430   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:36:55.006523   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:50.768099   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768599   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768629   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.768554   80921 retry.go:31] will retry after 487.741283ms: waiting for machine to come up
	I0814 17:36:51.258499   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259047   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:51.258975   80921 retry.go:31] will retry after 831.435484ms: waiting for machine to come up
	I0814 17:36:52.091900   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092297   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092351   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:52.092249   80921 retry.go:31] will retry after 1.067858402s: waiting for machine to come up
	I0814 17:36:53.161928   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162393   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:53.162366   80921 retry.go:31] will retry after 1.33971606s: waiting for machine to come up
	I0814 17:36:54.503810   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504184   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504214   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:54.504121   80921 retry.go:31] will retry after 1.4882184s: waiting for machine to come up
	I0814 17:36:55.506634   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.007367   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.507265   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.007343   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.026436   79521 api_server.go:72] duration metric: took 2.020005984s to wait for apiserver process to appear ...
	I0814 17:36:57.026471   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:36:57.026496   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:36:55.994824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995255   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995283   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:55.995206   80921 retry.go:31] will retry after 1.65461779s: waiting for machine to come up
	I0814 17:36:57.651449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651837   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651867   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:57.651794   80921 retry.go:31] will retry after 2.38071296s: waiting for machine to come up
	I0814 17:37:00.033719   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034261   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034290   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:00.034204   80921 retry.go:31] will retry after 3.476533232s: waiting for machine to come up
	I0814 17:37:00.329636   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.329674   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.329689   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.357287   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.357334   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.527150   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.536020   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:00.536058   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.026558   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.034241   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.034271   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.526814   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.536226   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.536267   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:02.026791   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:02.031068   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:37:02.037240   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:02.037266   79521 api_server.go:131] duration metric: took 5.010786446s to wait for apiserver health ...
	I0814 17:37:02.037278   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:37:02.037286   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:02.039248   79521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:02.040543   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:02.050754   79521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:02.067333   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:02.076082   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:02.076115   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:02.076130   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:02.076139   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:02.076178   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:02.076198   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:02.076218   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:02.076233   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:02.076242   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:02.076253   79521 system_pods.go:74] duration metric: took 8.901356ms to wait for pod list to return data ...
	I0814 17:37:02.076266   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:02.080064   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:02.080087   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:02.080101   79521 node_conditions.go:105] duration metric: took 3.829329ms to run NodePressure ...
	I0814 17:37:02.080121   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:02.359163   79521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368689   79521 kubeadm.go:739] kubelet initialised
	I0814 17:37:02.368717   79521 kubeadm.go:740] duration metric: took 9.524301ms waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368728   79521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:02.376056   79521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.381317   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381347   79521 pod_ready.go:81] duration metric: took 5.262062ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.381359   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381370   79521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.386799   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386822   79521 pod_ready.go:81] duration metric: took 5.440585ms for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.386832   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386838   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.392829   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392853   79521 pod_ready.go:81] duration metric: took 6.003762ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.392864   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392874   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.470943   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470975   79521 pod_ready.go:81] duration metric: took 78.089715ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.470984   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470996   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.870134   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870163   79521 pod_ready.go:81] duration metric: took 399.157385ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.870175   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870183   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.270805   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270837   79521 pod_ready.go:81] duration metric: took 400.647029ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.270848   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270856   79521 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.671023   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671058   79521 pod_ready.go:81] duration metric: took 400.191147ms for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.671070   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671079   79521 pod_ready.go:38] duration metric: took 1.302340033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:03.671098   79521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:37:03.683676   79521 ops.go:34] apiserver oom_adj: -16
	I0814 17:37:03.683701   79521 kubeadm.go:597] duration metric: took 9.964625256s to restartPrimaryControlPlane
	I0814 17:37:03.683712   79521 kubeadm.go:394] duration metric: took 10.009956133s to StartCluster
	I0814 17:37:03.683729   79521 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.683809   79521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:03.685474   79521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.685708   79521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:37:03.685766   79521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:37:03.685850   79521 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-309673"
	I0814 17:37:03.685862   79521 addons.go:69] Setting default-storageclass=true in profile "embed-certs-309673"
	I0814 17:37:03.685900   79521 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-309673"
	I0814 17:37:03.685907   79521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-309673"
	W0814 17:37:03.685911   79521 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:37:03.685933   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:03.685933   79521 addons.go:69] Setting metrics-server=true in profile "embed-certs-309673"
	I0814 17:37:03.685988   79521 addons.go:234] Setting addon metrics-server=true in "embed-certs-309673"
	W0814 17:37:03.686006   79521 addons.go:243] addon metrics-server should already be in state true
	I0814 17:37:03.685945   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686076   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686284   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686362   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686391   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686422   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686482   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686538   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.687598   79521 out.go:177] * Verifying Kubernetes components...
	I0814 17:37:03.688995   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:03.701610   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0814 17:37:03.702174   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.702789   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.702817   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.703223   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.703682   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.704077   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0814 17:37:03.704508   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.704864   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0814 17:37:03.705141   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705154   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.705224   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.705473   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.705656   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705670   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.706806   79521 addons.go:234] Setting addon default-storageclass=true in "embed-certs-309673"
	W0814 17:37:03.706824   79521 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:37:03.706851   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.707093   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707112   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.707420   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.707536   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707584   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.708017   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.708079   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.722383   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0814 17:37:03.722779   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.723288   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.723307   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.728799   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0814 17:37:03.728839   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38781
	I0814 17:37:03.728928   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.729426   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729495   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729776   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.729809   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729967   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.729973   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.730360   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730371   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730698   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.730749   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.732979   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.733596   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.735250   79521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:03.735262   79521 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:37:03.736576   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:37:03.736593   79521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:37:03.736607   79521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.736612   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.736620   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:37:03.736637   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.740008   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740123   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740491   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740558   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740676   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.740819   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740842   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740872   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.740994   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741120   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.741160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.741527   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.741692   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741817   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.749144   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0814 17:37:03.749482   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.749914   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.749929   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.750267   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.750467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.752107   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.752325   79521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:03.752339   79521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:37:03.752360   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.754559   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.754845   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.754859   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.755073   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.755247   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.755402   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.755529   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.877535   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:03.897022   79521 node_ready.go:35] waiting up to 6m0s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:03.951512   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.988066   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:37:03.988085   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:37:04.014925   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:04.025506   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:37:04.025531   79521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:37:04.072457   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:04.072480   79521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:37:04.104804   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:05.067867   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.116315804s)
	I0814 17:37:05.067888   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052939793s)
	I0814 17:37:05.067925   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.067935   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068000   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068023   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068241   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068322   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068336   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068345   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068364   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068454   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068485   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068497   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068505   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068795   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068815   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068823   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068830   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068872   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068905   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.087716   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.087746   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.088086   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.088106   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113388   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.008529856s)
	I0814 17:37:05.113441   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113458   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.113736   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.113787   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.113800   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113812   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113823   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.114057   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.114071   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.114081   79521 addons.go:475] Verifying addon metrics-server=true in "embed-certs-309673"
	I0814 17:37:05.114163   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.116443   79521 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:37:05.118087   79521 addons.go:510] duration metric: took 1.432323959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:37:03.512364   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512842   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:03.512785   80921 retry.go:31] will retry after 4.358649621s: waiting for machine to come up
	I0814 17:37:09.324026   80228 start.go:364] duration metric: took 3m22.895078586s to acquireMachinesLock for "old-k8s-version-505584"
	I0814 17:37:09.324085   80228 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:09.324101   80228 fix.go:54] fixHost starting: 
	I0814 17:37:09.324533   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:09.324575   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:09.344085   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0814 17:37:09.344490   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:09.344980   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:37:09.345006   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:09.345416   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:09.345674   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:09.345842   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetState
	I0814 17:37:09.347489   80228 fix.go:112] recreateIfNeeded on old-k8s-version-505584: state=Stopped err=<nil>
	I0814 17:37:09.347511   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	W0814 17:37:09.347696   80228 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:09.349747   80228 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-505584" ...
	I0814 17:37:05.901013   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.901054   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.876377   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876820   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has current primary IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876845   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Found IP for machine: 192.168.50.184
	I0814 17:37:07.876857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserving static IP address...
	I0814 17:37:07.877281   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.877300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserved static IP address: 192.168.50.184
	I0814 17:37:07.877320   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | skip adding static IP to network mk-default-k8s-diff-port-885666 - found existing host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"}
	I0814 17:37:07.877339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Getting to WaitForSSH function...
	I0814 17:37:07.877355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for SSH to be available...
	I0814 17:37:07.879843   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880200   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.880242   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880419   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH client type: external
	I0814 17:37:07.880445   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa (-rw-------)
	I0814 17:37:07.880496   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:07.880517   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | About to run SSH command:
	I0814 17:37:07.880549   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | exit 0
	I0814 17:37:08.007553   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:08.007929   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetConfigRaw
	I0814 17:37:08.009171   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.012358   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.012772   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.012804   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.013076   79871 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/config.json ...
	I0814 17:37:08.013284   79871 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:08.013310   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:08.013579   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.015965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016325   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.016363   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016491   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.016680   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.016873   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.017004   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.017140   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.017354   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.017376   79871 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:08.132369   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:08.132404   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132657   79871 buildroot.go:166] provisioning hostname "default-k8s-diff-port-885666"
	I0814 17:37:08.132695   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.136230   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.136696   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136937   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.137163   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137350   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.137672   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.137878   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.137900   79871 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-885666 && echo "default-k8s-diff-port-885666" | sudo tee /etc/hostname
	I0814 17:37:08.273593   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-885666
	
	I0814 17:37:08.273626   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.276470   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.276830   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.276862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.277137   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.277382   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277547   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277713   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.277855   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.278052   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.278072   79871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-885666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-885666/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-885666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:08.401522   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:08.401556   79871 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:08.401602   79871 buildroot.go:174] setting up certificates
	I0814 17:37:08.401626   79871 provision.go:84] configureAuth start
	I0814 17:37:08.401650   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.401963   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.404855   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.405285   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405521   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.407826   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408338   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.408371   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408515   79871 provision.go:143] copyHostCerts
	I0814 17:37:08.408583   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:08.408597   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:08.408681   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:08.408812   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:08.408823   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:08.408861   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:08.408947   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:08.408956   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:08.408984   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:08.409064   79871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-885666 san=[127.0.0.1 192.168.50.184 default-k8s-diff-port-885666 localhost minikube]
	I0814 17:37:08.613459   79871 provision.go:177] copyRemoteCerts
	I0814 17:37:08.613530   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:08.613575   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.616704   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617044   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.617072   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.617515   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.617698   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.617844   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:08.705505   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:08.728835   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 17:37:08.751995   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:08.774577   79871 provision.go:87] duration metric: took 372.933752ms to configureAuth
	I0814 17:37:08.774609   79871 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:08.774812   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:08.774880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.777840   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778235   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.778260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778527   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.778752   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.778899   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.779020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.779162   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.779437   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.779458   79871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:09.055900   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:09.055927   79871 machine.go:97] duration metric: took 1.04262996s to provisionDockerMachine
	I0814 17:37:09.055943   79871 start.go:293] postStartSetup for "default-k8s-diff-port-885666" (driver="kvm2")
	I0814 17:37:09.055957   79871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:09.055982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.056325   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:09.056355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.059396   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.059853   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.059888   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.060064   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.060280   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.060558   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.060745   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.150649   79871 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:09.155263   79871 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:09.155295   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:09.155400   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:09.155500   79871 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:09.155623   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:09.167051   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:09.197223   79871 start.go:296] duration metric: took 141.264897ms for postStartSetup
	I0814 17:37:09.197324   79871 fix.go:56] duration metric: took 21.221265818s for fixHost
	I0814 17:37:09.197356   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.201388   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.201965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.202011   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.202109   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.202354   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202569   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202800   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.203010   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:09.203196   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:09.203209   79871 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:37:09.323868   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657029.302975780
	
	I0814 17:37:09.323892   79871 fix.go:216] guest clock: 1723657029.302975780
	I0814 17:37:09.323900   79871 fix.go:229] Guest: 2024-08-14 17:37:09.30297578 +0000 UTC Remote: 2024-08-14 17:37:09.197335302 +0000 UTC m=+253.546385360 (delta=105.640478ms)
	I0814 17:37:09.323918   79871 fix.go:200] guest clock delta is within tolerance: 105.640478ms
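The guest clock check above runs `date +%s.%N` on the VM and compares it with the host-side timestamp. A small sketch of that comparison, assuming the two timestamps from the log and a 2-second tolerance (the exact threshold minikube uses is not shown here):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1723657029.302975780") // guest output from the log above
	host := time.Unix(1723657029, 197335302)            // host-side timestamp from the log above
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within a 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}
```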
	I0814 17:37:09.323923   79871 start.go:83] releasing machines lock for "default-k8s-diff-port-885666", held for 21.347903434s
	I0814 17:37:09.323948   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.324209   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:09.327260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327802   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.327833   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327993   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328727   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328814   79871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:09.328862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.328955   79871 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:09.328972   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.331813   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332081   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332233   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332274   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332365   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332490   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332512   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332555   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332761   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.332824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332882   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.332926   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.333021   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.416041   79871 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:09.456024   79871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:09.604623   79871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:09.610562   79871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:09.610624   79871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:09.627298   79871 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:09.627344   79871 start.go:495] detecting cgroup driver to use...
	I0814 17:37:09.627418   79871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:09.648212   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:09.666047   79871 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:09.666107   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:09.681875   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:09.695920   79871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:09.824502   79871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:09.979561   79871 docker.go:233] disabling docker service ...
	I0814 17:37:09.979658   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:09.996877   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:10.014264   79871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:10.166653   79871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:10.288261   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:10.301868   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:10.320716   79871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:10.320788   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.331099   79871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:10.331158   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.342841   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.353762   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.364604   79871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:10.376521   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.386787   79871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.406713   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.418047   79871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:10.428368   79871 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:10.428433   79871 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:10.442759   79871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
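The fallback above handles guests where the bridge netfilter sysctl does not exist yet: load br_netfilter, then enable IPv4 forwarding. A hedged sketch of the same sequence using os/exec (ensureBridgeNetfilter is a made-up name, and it assumes passwordless sudo):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf sysctl
// is missing, load br_netfilter, then make sure IPv4 forwarding is on.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter"); err.Run() != nil {
			return fmt.Errorf("modprobe br_netfilter failed")
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```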
	I0814 17:37:10.452993   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:10.563097   79871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:10.716953   79871 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:10.717031   79871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:10.722685   79871 start.go:563] Will wait 60s for crictl version
	I0814 17:37:10.722759   79871 ssh_runner.go:195] Run: which crictl
	I0814 17:37:10.726621   79871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:10.764534   79871 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:10.764628   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.791513   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.822380   79871 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:09.351136   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .Start
	I0814 17:37:09.351338   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring networks are active...
	I0814 17:37:09.352075   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network default is active
	I0814 17:37:09.352333   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network mk-old-k8s-version-505584 is active
	I0814 17:37:09.352701   80228 main.go:141] libmachine: (old-k8s-version-505584) Getting domain xml...
	I0814 17:37:09.353363   80228 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
	I0814 17:37:10.664390   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting to get IP...
	I0814 17:37:10.665484   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.665915   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.665980   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.665888   81116 retry.go:31] will retry after 285.047327ms: waiting for machine to come up
	I0814 17:37:10.952552   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.953009   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.953036   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.952973   81116 retry.go:31] will retry after 281.728141ms: waiting for machine to come up
	I0814 17:37:11.236576   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.237153   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.237192   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.237079   81116 retry.go:31] will retry after 341.673836ms: waiting for machine to come up
	I0814 17:37:10.401790   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:11.400713   79521 node_ready.go:49] node "embed-certs-309673" has status "Ready":"True"
	I0814 17:37:11.400742   79521 node_ready.go:38] duration metric: took 7.503686271s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:11.400755   79521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:11.408217   79521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414215   79521 pod_ready.go:92] pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:11.414244   79521 pod_ready.go:81] duration metric: took 5.997997ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414256   79521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:13.420804   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:10.824020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:10.827965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828426   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:10.828465   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828807   79871 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:10.833261   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:10.846928   79871 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:10.847080   79871 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:10.847142   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:10.889355   79871 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:10.889453   79871 ssh_runner.go:195] Run: which lz4
	I0814 17:37:10.894405   79871 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:37:10.898992   79871 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:10.899029   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:37:12.155402   79871 crio.go:462] duration metric: took 1.261016682s to copy over tarball
	I0814 17:37:12.155485   79871 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:14.344118   79871 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18859644s)
	I0814 17:37:14.344162   79871 crio.go:469] duration metric: took 2.188726026s to extract the tarball
	I0814 17:37:14.344173   79871 ssh_runner.go:146] rm: /preloaded.tar.lz4
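The preload path above copies the lz4 tarball to the guest, unpacks it into /var with extended attributes preserved, and then deletes it. A rough equivalent of the extract step, assuming the tarball path from the log and sudo access:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload mirrors the tarball step above: stream-decompress the lz4
// preload into /var while preserving extended attributes, and report how
// long it took (the log's "duration metric").
func extractPreload(tarball string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("tar: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := extractPreload("/preloaded.tar.lz4")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("extracted in", d)
}
```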
	I0814 17:37:14.380317   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:14.428289   79871 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:37:14.428312   79871 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:37:14.428326   79871 kubeadm.go:934] updating node { 192.168.50.184 8444 v1.31.0 crio true true} ...
	I0814 17:37:14.428422   79871 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-885666 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
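The kubelet drop-in above is generated per profile; only the Kubernetes version, hostname override, and node IP change. A sketch that renders the same unit with text/template (values taken from this profile; the template name and struct are illustrative):

```go
package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is a sketch of the kubelet ExecStart drop-in shown above;
// Version, NodeName and NodeIP are the only values that vary per profile.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, struct {
		Version, NodeName, NodeIP string
	}{"v1.31.0", "default-k8s-diff-port-885666", "192.168.50.184"})
}
```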
	I0814 17:37:14.428491   79871 ssh_runner.go:195] Run: crio config
	I0814 17:37:14.475385   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:14.475416   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:14.475433   79871 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:14.475464   79871 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-885666 NodeName:default-k8s-diff-port-885666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:37:14.475645   79871 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-885666"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
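Before restarting the control plane, the generated multi-document kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new. A quick, assumption-laden sanity check that lists the apiVersion/kind of each document (simple prefix matching, not a real YAML parser, and not part of minikube's own validation):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// listKinds scans a multi-document kubeadm config like the one generated
// above and prints the apiVersion/kind pair of each document, a quick check
// that Init, Cluster, Kubelet and KubeProxy configs are all present.
func listKinds(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	var apiVersion string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "apiVersion:"):
			apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
		case strings.HasPrefix(line, "kind:"):
			fmt.Println(apiVersion, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
		}
	}
	return sc.Err()
}

func main() {
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```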
	
	I0814 17:37:14.475712   79871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:37:14.485148   79871 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:14.485206   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:14.494161   79871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 17:37:14.511050   79871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:14.526395   79871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0814 17:37:14.543061   79871 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:14.546747   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:14.558022   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:14.671818   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:14.688541   79871 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666 for IP: 192.168.50.184
	I0814 17:37:14.688583   79871 certs.go:194] generating shared ca certs ...
	I0814 17:37:14.688609   79871 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:14.688823   79871 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:14.688889   79871 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:14.688903   79871 certs.go:256] generating profile certs ...
	I0814 17:37:14.689020   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/client.key
	I0814 17:37:14.689132   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key.690c84bc
	I0814 17:37:14.689182   79871 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key
	I0814 17:37:14.689310   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:14.689367   79871 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:14.689385   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:14.689422   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:14.689453   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:14.689479   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:14.689524   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:14.690168   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:14.717906   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:14.759373   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:14.809775   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:14.834875   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 17:37:14.857860   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:14.886813   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:14.909803   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:14.935075   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:14.959759   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:14.985877   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:15.008456   79871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:15.025602   79871 ssh_runner.go:195] Run: openssl version
	I0814 17:37:15.031392   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:15.041931   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046475   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046531   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.052377   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:15.063000   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:15.073463   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078411   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078471   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.083835   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:15.093753   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:15.103876   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108487   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108559   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.114104   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:15.124285   79871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:15.128515   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:15.134223   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:15.139700   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:15.145537   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:15.151287   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:15.156766   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
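The series of `openssl x509 ... -checkend 86400` runs above verifies that each control-plane certificate is still valid for at least 24 hours. An equivalent check using Go's standard library, with the certificate path taken from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path is still valid for at
// least the given duration, mirroring `openssl x509 -checkend`.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for 24h:", ok)
}
```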
	I0814 17:37:15.162149   79871 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:15.162256   79871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:15.162314   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.198745   79871 cri.go:89] found id: ""
	I0814 17:37:15.198814   79871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:15.212198   79871 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:15.212216   79871 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:15.212256   79871 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:15.224275   79871 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:15.225218   79871 kubeconfig.go:125] found "default-k8s-diff-port-885666" server: "https://192.168.50.184:8444"
	I0814 17:37:15.227291   79871 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:15.237448   79871 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.184
	I0814 17:37:15.237494   79871 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:15.237509   79871 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:15.237563   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.281593   79871 cri.go:89] found id: ""
	I0814 17:37:15.281662   79871 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:15.298596   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:15.308702   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:15.308723   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:15.308779   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:37:15.318348   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:15.318409   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:15.330049   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:37:15.341283   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:15.341373   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:15.350584   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.361658   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:15.361718   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.373526   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:37:15.382360   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:15.382432   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
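The cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the ones that do not contain it (here they are simply missing). A sketch of the same decision, where staleConfigs is a hypothetical helper and the endpoint and file list come from the log:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// staleConfigs returns the kubeconfig-style files that do not reference the
// expected control-plane endpoint; the restart path above deletes exactly
// those before re-running `kubeadm init phase kubeconfig`.
func staleConfigs(endpoint string, paths []string) []string {
	var stale []string
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			stale = append(stale, p)
		}
	}
	return stale
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, p := range staleConfigs("https://control-plane.minikube.internal:8444", files) {
		fmt.Println("stale:", p)
	}
}
```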
	I0814 17:37:15.392477   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:15.402387   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:15.528954   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:11.580887   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.581466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.581500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.581392   81116 retry.go:31] will retry after 514.448726ms: waiting for machine to come up
	I0814 17:37:12.098137   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.098652   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.098740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.098642   81116 retry.go:31] will retry after 649.302617ms: waiting for machine to come up
	I0814 17:37:12.749349   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.749777   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.749803   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.749736   81116 retry.go:31] will retry after 897.486278ms: waiting for machine to come up
	I0814 17:37:13.649145   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:13.649666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:13.649698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:13.649621   81116 retry.go:31] will retry after 1.017213079s: waiting for machine to come up
	I0814 17:37:14.669187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:14.669715   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:14.669740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:14.669679   81116 retry.go:31] will retry after 1.014709613s: waiting for machine to come up
	I0814 17:37:15.685748   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:15.686269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:15.686299   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:15.686217   81116 retry.go:31] will retry after 1.476940798s: waiting for machine to come up
	I0814 17:37:15.422067   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.421689   79521 pod_ready.go:92] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.421715   79521 pod_ready.go:81] duration metric: took 5.007451471s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.421724   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426620   79521 pod_ready.go:92] pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.426644   79521 pod_ready.go:81] duration metric: took 4.912475ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426657   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430754   79521 pod_ready.go:92] pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.430776   79521 pod_ready.go:81] duration metric: took 4.110475ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430787   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434469   79521 pod_ready.go:92] pod "kube-proxy-z8x9t" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.434487   79521 pod_ready.go:81] duration metric: took 3.693253ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434498   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438294   79521 pod_ready.go:92] pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.438314   79521 pod_ready.go:81] duration metric: took 3.80298ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438346   79521 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:18.445838   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.453075   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.676680   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.741803   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.831091   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:16.831186   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.332193   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.831346   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.331620   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.832011   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.331528   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.348083   79871 api_server.go:72] duration metric: took 2.516990388s to wait for apiserver process to appear ...
	I0814 17:37:19.348119   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:37:19.348144   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
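
	The five "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a poll loop: the command is retried roughly every 500ms until pgrep exits 0, i.e. until a kube-apiserver process whose command line mentions minikube exists. A minimal local sketch of that pattern, assuming a hypothetical waitForAPIServer helper and a 500ms interval (minikube's real code runs the same command over SSH through its ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until a kube-apiserver process appears or the
	// timeout elapses. -x matches the process name exactly, -n picks the newest
	// match, -f matches against the full command line.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
			if err := cmd.Run(); err == nil {
				return nil // pgrep exits 0 once a matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	}

	func main() {
		if err := waitForAPIServer(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}
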
	I0814 17:37:17.164541   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:17.165093   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:17.165122   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:17.165017   81116 retry.go:31] will retry after 1.644726601s: waiting for machine to come up
	I0814 17:37:18.811628   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:18.812199   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:18.812224   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:18.812132   81116 retry.go:31] will retry after 2.740531885s: waiting for machine to come up
	I0814 17:37:21.576628   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.576657   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.576672   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.601355   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.601389   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.848481   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.855499   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:21.855530   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.349158   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.353345   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:22.353368   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.848954   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.853912   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:37:22.865096   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:22.865127   79871 api_server.go:131] duration metric: took 3.516999004s to wait for apiserver health ...
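
	The health wait above keeps GETing https://192.168.50.184:8444/healthz about every 500ms: 403 answers mean the anonymous probe is rejected because RBAC bootstrap roles are not installed yet, 500 answers list the post-start hooks still failing ([-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/scheduling/bootstrap-system-priority-classes), and the loop exits once the endpoint returns 200 with body "ok". A rough sketch of such a probe loop, assuming an anonymous client with TLS verification disabled (illustrative only, not minikube's implementation):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Anonymous probe: no client certificates, server certificate not verified.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		url := "https://192.168.50.184:8444/healthz"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				// 403/500 are expected while the control plane is still bootstrapping.
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}
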
	I0814 17:37:22.865138   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:22.865146   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:22.866812   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:20.446123   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.446518   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:24.945729   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.867939   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:22.881586   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:22.899815   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:22.910873   79871 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:22.910928   79871 system_pods.go:61] "coredns-6f6b679f8f-mxc9v" [d1b9d422-faff-4709-a375-f8783e75e18c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:22.910946   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [a5473465-a1c1-4413-8e77-74fb1eb398a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:22.910956   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [06c53e48-b156-42b1-b381-818f75821196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:22.910966   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [18a2d7fb-4e18-4880-8812-63d25934699b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:22.910977   79871 system_pods.go:61] "kube-proxy-4rrff" [14453cc8-da7d-4dd4-b7fa-89a26dbbf23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:22.910995   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [f0455f16-9a3e-4ede-8524-f701b1ab4ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:22.911005   79871 system_pods.go:61] "metrics-server-6867b74b74-qtzm8" [04c797ec-2e38-42a7-a023-5f60c451f780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:22.911020   79871 system_pods.go:61] "storage-provisioner" [88c2e8f0-0706-494a-8e83-0ede8f129040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:22.911032   79871 system_pods.go:74] duration metric: took 11.192968ms to wait for pod list to return data ...
	I0814 17:37:22.911044   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:22.915096   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:22.915128   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:22.915140   79871 node_conditions.go:105] duration metric: took 4.087198ms to run NodePressure ...
	I0814 17:37:22.915165   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:23.204612   79871 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209643   79871 kubeadm.go:739] kubelet initialised
	I0814 17:37:23.209665   79871 kubeadm.go:740] duration metric: took 5.023123ms waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209673   79871 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:23.215957   79871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.221969   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.221993   79871 pod_ready.go:81] duration metric: took 6.011053ms for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.222008   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.222014   79871 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.227119   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227147   79871 pod_ready.go:81] duration metric: took 5.125006ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.227157   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227163   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.231297   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231321   79871 pod_ready.go:81] duration metric: took 4.149023ms for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.231346   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231355   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:25.239956   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
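
	The pod_ready lines above come down to reading the PodReady condition on each system-critical pod, with one extra rule visible at 17:37:23: if the hosting node is itself not Ready, the pod is skipped rather than waited for. A minimal sketch of the condition check using the Kubernetes Go types (a fragment for illustration, not minikube's pod_ready.go):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady reports whether the PodReady condition on the pod is True,
	// which is what the "Ready":"True"/"False" log lines above are printing.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}}}
		fmt.Println(isPodReady(p)) // false, matching the metrics-server lines above
	}
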
	I0814 17:37:21.555057   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:21.555530   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:21.555562   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:21.555484   81116 retry.go:31] will retry after 3.159225533s: waiting for machine to come up
	I0814 17:37:24.716173   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:24.716482   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:24.716507   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:24.716451   81116 retry.go:31] will retry after 3.32732131s: waiting for machine to come up
	I0814 17:37:29.512066   79367 start.go:364] duration metric: took 55.26941078s to acquireMachinesLock for "no-preload-545149"
	I0814 17:37:29.512115   79367 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:29.512123   79367 fix.go:54] fixHost starting: 
	I0814 17:37:29.512539   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:29.512574   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:29.529625   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0814 17:37:29.530074   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:29.530558   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:37:29.530585   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:29.530930   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:29.531149   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:29.531291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:37:29.532999   79367 fix.go:112] recreateIfNeeded on no-preload-545149: state=Stopped err=<nil>
	I0814 17:37:29.533049   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	W0814 17:37:29.533224   79367 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:29.535000   79367 out.go:177] * Restarting existing kvm2 VM for "no-preload-545149" ...
	I0814 17:37:27.445398   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.945246   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:27.737698   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.737890   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:28.045690   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046151   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has current primary IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046177   80228 main.go:141] libmachine: (old-k8s-version-505584) Found IP for machine: 192.168.72.49
	I0814 17:37:28.046192   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserving static IP address...
	I0814 17:37:28.046500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.046524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | skip adding static IP to network mk-old-k8s-version-505584 - found existing host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"}
	I0814 17:37:28.046540   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserved static IP address: 192.168.72.49
	I0814 17:37:28.046559   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting for SSH to be available...
	I0814 17:37:28.046571   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Getting to WaitForSSH function...
	I0814 17:37:28.048709   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049082   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.049106   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049252   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH client type: external
	I0814 17:37:28.049285   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa (-rw-------)
	I0814 17:37:28.049325   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:28.049342   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | About to run SSH command:
	I0814 17:37:28.049356   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | exit 0
	I0814 17:37:28.179844   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:28.180193   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:37:28.180865   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.183617   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184074   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.184118   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184367   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:37:28.184641   80228 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:28.184663   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:28.184891   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.187158   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187517   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.187547   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187696   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.187857   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188027   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188178   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.188320   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.188570   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.188587   80228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:28.303564   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:28.303597   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.303831   80228 buildroot.go:166] provisioning hostname "old-k8s-version-505584"
	I0814 17:37:28.303856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.304021   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.306826   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307180   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.307210   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307415   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.307608   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307769   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307915   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.308131   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.308336   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.308354   80228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-505584 && echo "old-k8s-version-505584" | sudo tee /etc/hostname
	I0814 17:37:28.434224   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-505584
	
	I0814 17:37:28.434261   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.437350   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437633   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.437666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.438077   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438245   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438395   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.438623   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.438832   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.438857   80228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-505584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-505584/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-505584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:28.564784   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
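
	Both provisioning commands above (setting the hostname, then adding a 127.0.1.1 entry to /etc/hosts if one is missing) are run over SSH as the docker user with the machine's generated private key. A bare-bones sketch of executing one such command with golang.org/x/crypto/ssh, assuming key-only auth and an ignored host key as in the ssh options logged earlier (illustrative, not libmachine's client):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", "192.168.72.49:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		fmt.Printf("out=%q err=%v\n", out, err)
	}
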
	I0814 17:37:28.564815   80228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:28.564858   80228 buildroot.go:174] setting up certificates
	I0814 17:37:28.564872   80228 provision.go:84] configureAuth start
	I0814 17:37:28.564884   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.565188   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.568217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.568698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.568731   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.569013   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.571364   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571780   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.571805   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571961   80228 provision.go:143] copyHostCerts
	I0814 17:37:28.572023   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:28.572032   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:28.572076   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:28.572176   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:28.572184   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:28.572206   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:28.572275   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:28.572284   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:28.572337   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:28.572435   80228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-505584 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
	I0814 17:37:28.804798   80228 provision.go:177] copyRemoteCerts
	I0814 17:37:28.804853   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:28.804879   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.807967   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.808302   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808458   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.808690   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.808874   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.809001   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:28.900346   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:28.926959   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 17:37:28.955373   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:28.984436   80228 provision.go:87] duration metric: took 419.552519ms to configureAuth
	I0814 17:37:28.984463   80228 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:28.984630   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:37:28.984713   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.987602   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988077   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.988107   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988237   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.988486   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988641   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988768   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.988986   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.989209   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.989234   80228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:29.262630   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:29.262656   80228 machine.go:97] duration metric: took 1.078000469s to provisionDockerMachine
	I0814 17:37:29.262669   80228 start.go:293] postStartSetup for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:37:29.262683   80228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:29.262704   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.263051   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:29.263082   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.266020   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.266495   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266720   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.266919   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.267093   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.267253   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.354027   80228 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:29.358196   80228 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:29.358224   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:29.358304   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:29.358416   80228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:29.358543   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:29.367802   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:29.392802   80228 start.go:296] duration metric: took 130.117007ms for postStartSetup
	I0814 17:37:29.392846   80228 fix.go:56] duration metric: took 20.068754346s for fixHost
	I0814 17:37:29.392871   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.395638   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396032   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.396064   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396251   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.396516   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396698   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396893   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.397155   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:29.397326   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:29.397340   80228 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:29.511889   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657049.468340520
	
	I0814 17:37:29.511913   80228 fix.go:216] guest clock: 1723657049.468340520
	I0814 17:37:29.511923   80228 fix.go:229] Guest: 2024-08-14 17:37:29.46834052 +0000 UTC Remote: 2024-08-14 17:37:29.392851248 +0000 UTC m=+223.104093144 (delta=75.489272ms)
	I0814 17:37:29.511983   80228 fix.go:200] guest clock delta is within tolerance: 75.489272ms
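
	The clock-skew check above reads the guest's clock over SSH (date +%s.%N, hence 1723657049.468340520), compares it with the host-side timestamp, and accepts the result if the difference is small. Reproducing the arithmetic from the two timestamps printed in the log (the 1s tolerance below is an assumption for illustration, not minikube's actual threshold):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps taken verbatim from the fix.go:229 line above.
		guest := time.Date(2024, 8, 14, 17, 37, 29, 468340520, time.UTC)
		remote := time.Date(2024, 8, 14, 17, 37, 29, 392851248, time.UTC)
		delta := guest.Sub(remote)
		const tolerance = time.Second // assumed for illustration
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
		// Prints: delta=75.489272ms within tolerance: true
	}
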
	I0814 17:37:29.511996   80228 start.go:83] releasing machines lock for "old-k8s-version-505584", held for 20.187937886s
	I0814 17:37:29.512031   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.512333   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:29.515152   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515487   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.515524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515735   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516299   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516497   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516643   80228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:29.516723   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.516727   80228 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:29.516752   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.519600   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.519751   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520017   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520045   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520164   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520192   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520341   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520423   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520520   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520588   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520646   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520718   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.520780   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.642824   80228 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:29.648834   80228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:29.795482   80228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:29.801407   80228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:29.801486   80228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:29.821662   80228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:29.821684   80228 start.go:495] detecting cgroup driver to use...
	I0814 17:37:29.821761   80228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:29.843829   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:29.859505   80228 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:29.859590   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:29.873790   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:29.889295   80228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:30.035909   80228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:30.209521   80228 docker.go:233] disabling docker service ...
	I0814 17:37:30.209574   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:30.226980   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:30.241678   80228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:30.375116   80228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:30.498357   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:30.512272   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:30.533062   80228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:37:30.533130   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.543595   80228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:30.543664   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.554139   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.564417   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.574627   80228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:30.584957   80228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:30.594667   80228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:30.594720   80228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:30.606826   80228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:30.621990   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:30.758992   80228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:30.915494   80228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:30.915572   80228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:30.920692   80228 start.go:563] Will wait 60s for crictl version
	I0814 17:37:30.920767   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:30.924365   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:30.964662   80228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:30.964756   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:30.995534   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:31.025400   80228 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:37:31.026943   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:31.030217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030630   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:31.030665   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030943   80228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:31.034960   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:31.047742   80228 kubeadm.go:883] updating cluster {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:31.047864   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:37:31.047926   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:31.092203   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:31.092278   80228 ssh_runner.go:195] Run: which lz4
	I0814 17:37:31.096471   80228 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:37:31.100610   80228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:31.100642   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:37:29.536310   79367 main.go:141] libmachine: (no-preload-545149) Calling .Start
	I0814 17:37:29.536513   79367 main.go:141] libmachine: (no-preload-545149) Ensuring networks are active...
	I0814 17:37:29.537431   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network default is active
	I0814 17:37:29.537935   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network mk-no-preload-545149 is active
	I0814 17:37:29.538468   79367 main.go:141] libmachine: (no-preload-545149) Getting domain xml...
	I0814 17:37:29.539383   79367 main.go:141] libmachine: (no-preload-545149) Creating domain...
	I0814 17:37:30.863155   79367 main.go:141] libmachine: (no-preload-545149) Waiting to get IP...
	I0814 17:37:30.864257   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:30.864722   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:30.864812   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:30.864695   81248 retry.go:31] will retry after 244.342973ms: waiting for machine to come up
	I0814 17:37:31.111211   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.111784   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.111806   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.111735   81248 retry.go:31] will retry after 277.033145ms: waiting for machine to come up
	I0814 17:37:31.390071   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.390511   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.390554   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.390429   81248 retry.go:31] will retry after 320.225451ms: waiting for machine to come up
	I0814 17:37:31.949069   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:34.445833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:31.741110   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:33.239418   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.239449   79871 pod_ready.go:81] duration metric: took 10.008084028s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.239462   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244600   79871 pod_ready.go:92] pod "kube-proxy-4rrff" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.244628   79871 pod_ready.go:81] duration metric: took 5.157296ms for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244648   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253613   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:35.253643   79871 pod_ready.go:81] duration metric: took 2.008985731s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253657   79871 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:32.582064   80228 crio.go:462] duration metric: took 1.485645107s to copy over tarball
	I0814 17:37:32.582151   80228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:35.556765   80228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974581109s)
	I0814 17:37:35.556795   80228 crio.go:469] duration metric: took 2.9747s to extract the tarball
	I0814 17:37:35.556802   80228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:35.599129   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:35.632752   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:35.632775   80228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:35.632831   80228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.632864   80228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.632892   80228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:37:35.632911   80228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.632944   80228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.633112   80228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634793   80228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.634821   80228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:37:35.634824   80228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634885   80228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.634910   80228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.635009   80228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.635082   80228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.635265   80228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.905566   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:37:35.953168   80228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:37:35.953210   80228 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:37:35.953260   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:35.957961   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:35.978859   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.978920   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.988556   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.993281   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.997933   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.018501   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.043527   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.146739   80228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:37:36.146812   80228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:37:36.146832   80228 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.146852   80228 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.146881   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.146891   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163832   80228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:37:36.163856   80228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:37:36.163877   80228 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.163889   80228 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.163923   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163924   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163927   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.172482   80228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:37:36.172530   80228 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.172599   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.195157   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.195208   80228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:37:36.195165   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.195242   80228 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.195245   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.195277   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.237454   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:37:36.237519   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.237549   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.292614   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.306771   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.306794   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:31.712067   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.712601   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.712630   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.712575   81248 retry.go:31] will retry after 546.687472ms: waiting for machine to come up
	I0814 17:37:32.261457   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.261921   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.261950   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.261854   81248 retry.go:31] will retry after 484.345236ms: waiting for machine to come up
	I0814 17:37:32.747475   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.748118   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.748149   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.748060   81248 retry.go:31] will retry after 899.564198ms: waiting for machine to come up
	I0814 17:37:33.649684   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:33.650206   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:33.650234   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:33.650155   81248 retry.go:31] will retry after 1.039934932s: waiting for machine to come up
	I0814 17:37:34.691741   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:34.692197   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:34.692220   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:34.692169   81248 retry.go:31] will retry after 925.402437ms: waiting for machine to come up
	I0814 17:37:35.618737   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:35.619169   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:35.619200   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:35.619102   81248 retry.go:31] will retry after 1.401066913s: waiting for machine to come up
	I0814 17:37:36.447039   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:38.945321   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:37.260912   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:39.759967   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:36.321893   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.339836   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.339929   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.426588   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.426659   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.433149   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.469717   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:36.477512   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.477583   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.477761   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.538635   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:37:36.557712   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:37:36.558304   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.700263   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:37:36.700333   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:37:36.700410   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:37:36.700481   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:37:36.700527   80228 cache_images.go:92] duration metric: took 1.067740607s to LoadCachedImages
	W0814 17:37:36.700602   80228 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 17:37:36.700623   80228 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0814 17:37:36.700757   80228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-505584 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:36.700846   80228 ssh_runner.go:195] Run: crio config
	I0814 17:37:36.748814   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:37:36.748843   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:36.748860   80228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:36.748885   80228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-505584 NodeName:old-k8s-version-505584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:37:36.749053   80228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-505584"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:36.749129   80228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:37:36.760058   80228 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:36.760131   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:36.769388   80228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0814 17:37:36.786594   80228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:36.807695   80228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0814 17:37:36.825609   80228 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:36.829296   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:36.841882   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:36.976199   80228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:36.993682   80228 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584 for IP: 192.168.72.49
	I0814 17:37:36.993707   80228 certs.go:194] generating shared ca certs ...
	I0814 17:37:36.993728   80228 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:36.993924   80228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:36.993985   80228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:36.993998   80228 certs.go:256] generating profile certs ...
	I0814 17:37:36.994115   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key
	I0814 17:37:36.994206   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f
	I0814 17:37:36.994261   80228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key
	I0814 17:37:36.994428   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:36.994478   80228 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:36.994492   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:36.994522   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:36.994557   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:36.994603   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:36.994661   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:36.995558   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:37.043910   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:37.073810   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:37.097939   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:37.124449   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 17:37:37.154747   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:37.179474   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:37.204471   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:37.228579   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:37.266929   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:37.292912   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:37.316803   80228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:37.332934   80228 ssh_runner.go:195] Run: openssl version
	I0814 17:37:37.339316   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:37.349829   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354230   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354297   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.360089   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:37.371417   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:37.381777   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385894   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385955   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.391826   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:37.402049   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:37.412038   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416395   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416448   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.421794   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:37.431868   80228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:37.436305   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:37.442838   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:37.448991   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:37.454769   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:37.460381   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:37.466406   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:37:37.472466   80228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:37.472584   80228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:37.472636   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.508256   80228 cri.go:89] found id: ""
	I0814 17:37:37.508323   80228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:37.518824   80228 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:37.518856   80228 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:37.518941   80228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:37.529328   80228 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:37.530242   80228 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:37.530890   80228 kubeconfig.go:62] /home/jenkins/minikube-integration/19446-13977/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-505584" cluster setting kubeconfig missing "old-k8s-version-505584" context setting]
	I0814 17:37:37.531922   80228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:37.539843   80228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:37.550012   80228 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0814 17:37:37.550051   80228 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:37.550063   80228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:37.550113   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.590226   80228 cri.go:89] found id: ""
	I0814 17:37:37.590307   80228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:37.606242   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:37.615340   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:37.615377   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:37.615436   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:37:37.623996   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:37.624063   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:37.633356   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:37:37.642888   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:37.642958   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:37.652532   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.661607   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:37.661679   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.670876   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:37:37.679780   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:37.679846   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:37.690044   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:37.699617   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:37.813799   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.666487   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.901307   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.029983   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.139056   80228 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:39.139133   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:39.639191   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.139315   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.639292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:41.139421   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:37.021766   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:37.022253   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:37.022282   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:37.022216   81248 retry.go:31] will retry after 2.184222941s: waiting for machine to come up
	I0814 17:37:39.209777   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:39.210239   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:39.210265   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:39.210203   81248 retry.go:31] will retry after 2.903962154s: waiting for machine to come up
	I0814 17:37:41.445413   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:43.949816   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.760985   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:44.260273   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.639312   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.139387   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.639981   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.139499   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.639391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.139425   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.639677   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.139466   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.639426   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:46.140065   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.116682   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:42.117116   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:42.117154   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:42.117086   81248 retry.go:31] will retry after 3.387467992s: waiting for machine to come up
	I0814 17:37:45.505680   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:45.506034   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:45.506056   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:45.505986   81248 retry.go:31] will retry after 2.944973353s: waiting for machine to come up
	I0814 17:37:46.443772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:48.445046   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.759297   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:49.260881   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.640043   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.139213   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.639848   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.140080   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.639961   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.139473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.639212   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.139781   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.640028   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.140140   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.452516   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453064   79367 main.go:141] libmachine: (no-preload-545149) Found IP for machine: 192.168.39.162
	I0814 17:37:48.453092   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has current primary IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453099   79367 main.go:141] libmachine: (no-preload-545149) Reserving static IP address...
	I0814 17:37:48.453513   79367 main.go:141] libmachine: (no-preload-545149) Reserved static IP address: 192.168.39.162
	I0814 17:37:48.453564   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.453578   79367 main.go:141] libmachine: (no-preload-545149) Waiting for SSH to be available...
	I0814 17:37:48.453608   79367 main.go:141] libmachine: (no-preload-545149) DBG | skip adding static IP to network mk-no-preload-545149 - found existing host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"}
	I0814 17:37:48.453630   79367 main.go:141] libmachine: (no-preload-545149) DBG | Getting to WaitForSSH function...
	I0814 17:37:48.455959   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456279   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.456304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456429   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH client type: external
	I0814 17:37:48.456449   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa (-rw-------)
	I0814 17:37:48.456490   79367 main.go:141] libmachine: (no-preload-545149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:48.456506   79367 main.go:141] libmachine: (no-preload-545149) DBG | About to run SSH command:
	I0814 17:37:48.456520   79367 main.go:141] libmachine: (no-preload-545149) DBG | exit 0
	I0814 17:37:48.579489   79367 main.go:141] libmachine: (no-preload-545149) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:48.579924   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetConfigRaw
	I0814 17:37:48.580615   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.583202   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583545   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.583592   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583857   79367 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/config.json ...
	I0814 17:37:48.584093   79367 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:48.584113   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:48.584340   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.586712   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587086   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.587107   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587259   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.587441   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587720   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.587869   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.588029   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.588040   79367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:48.691255   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:48.691290   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691555   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:37:48.691593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691798   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.694492   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694768   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.694797   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694907   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.695084   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695248   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695396   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.695556   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.695777   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.695798   79367 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-545149 && echo "no-preload-545149" | sudo tee /etc/hostname
	I0814 17:37:48.813509   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-545149
	
	I0814 17:37:48.813537   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.816304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816698   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.816732   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816884   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.817057   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817265   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817393   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.817586   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.817809   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.817836   79367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:48.927482   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:48.927512   79367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:48.927540   79367 buildroot.go:174] setting up certificates
	I0814 17:37:48.927551   79367 provision.go:84] configureAuth start
	I0814 17:37:48.927567   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.927831   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.930532   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.930879   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.930906   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.931104   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.933420   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933754   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.933783   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933893   79367 provision.go:143] copyHostCerts
	I0814 17:37:48.933968   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:48.933979   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:48.934040   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:48.934146   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:48.934156   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:48.934186   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:48.934262   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:48.934271   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:48.934302   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:48.934377   79367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.no-preload-545149 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-545149]
	I0814 17:37:49.287517   79367 provision.go:177] copyRemoteCerts
	I0814 17:37:49.287580   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:49.287607   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.290280   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.290690   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290856   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.291063   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.291180   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.291304   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.374716   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:49.398652   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:37:49.422885   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:37:49.448774   79367 provision.go:87] duration metric: took 521.207251ms to configureAuth
	I0814 17:37:49.448800   79367 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:49.448972   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:49.449064   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.452034   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452373   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.452403   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452604   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.452859   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453058   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453217   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.453388   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.453579   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.453601   79367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:49.711896   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:49.711922   79367 machine.go:97] duration metric: took 1.127817152s to provisionDockerMachine
	I0814 17:37:49.711933   79367 start.go:293] postStartSetup for "no-preload-545149" (driver="kvm2")
	I0814 17:37:49.711942   79367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:49.711977   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.712299   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:49.712324   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.714736   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715059   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.715097   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715232   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.715428   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.715616   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.715769   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.797746   79367 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:49.801764   79367 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:49.801794   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:49.801863   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:49.801960   79367 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:49.802081   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:49.811506   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:49.834762   79367 start.go:296] duration metric: took 122.81358ms for postStartSetup
	I0814 17:37:49.834812   79367 fix.go:56] duration metric: took 20.32268926s for fixHost
	I0814 17:37:49.834837   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.837418   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837739   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.837768   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837903   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.838114   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838292   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838438   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.838643   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.838838   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.838850   79367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:49.944936   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657069.919883473
	
	I0814 17:37:49.944965   79367 fix.go:216] guest clock: 1723657069.919883473
	I0814 17:37:49.944975   79367 fix.go:229] Guest: 2024-08-14 17:37:49.919883473 +0000 UTC Remote: 2024-08-14 17:37:49.834818813 +0000 UTC m=+358.184638535 (delta=85.06466ms)
	I0814 17:37:49.945005   79367 fix.go:200] guest clock delta is within tolerance: 85.06466ms
	I0814 17:37:49.945017   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 20.432923283s
	I0814 17:37:49.945044   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.945291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:49.947847   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948269   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.948295   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948500   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949082   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949262   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949347   79367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:49.949406   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.949517   79367 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:49.949541   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.952281   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952328   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952692   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952833   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.952836   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952895   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.953037   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.953075   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953201   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953212   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953400   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953412   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.953543   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:50.072094   79367 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:50.080210   79367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:50.227736   79367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:50.233533   79367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:50.233597   79367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:50.249452   79367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:50.249474   79367 start.go:495] detecting cgroup driver to use...
	I0814 17:37:50.249552   79367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:50.265740   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:50.278769   79367 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:50.278833   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:50.291625   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:50.304529   79367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:50.415405   79367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:50.556016   79367 docker.go:233] disabling docker service ...
	I0814 17:37:50.556092   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:50.570197   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:50.583068   79367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:50.721414   79367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:50.850890   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:50.864530   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:50.882021   79367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:50.882097   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.891490   79367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:50.891564   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.901437   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.911316   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.920935   79367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:50.930571   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.940106   79367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.957351   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.967222   79367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:50.976120   79367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:50.976170   79367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:50.990922   79367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:51.000086   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:51.116655   79367 ssh_runner.go:195] Run: sudo systemctl restart crio
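The run above tunes CRI-O for the test one sed edit at a time and then restarts the daemon. Condensed into a standalone snippet for reference (commands are copied from the log lines above; running it by hand is illustrative only, not an extra step the test performs):

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio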
	I0814 17:37:51.246182   79367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:51.246265   79367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:51.250838   79367 start.go:563] Will wait 60s for crictl version
	I0814 17:37:51.250900   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.254633   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:51.299890   79367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:51.299992   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.328292   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.360415   79367 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:51.361536   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:51.364443   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.364884   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:51.364914   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.365112   79367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:51.368941   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:51.380519   79367 kubeadm.go:883] updating cluster {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:51.380668   79367 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:51.380705   79367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:51.413314   79367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:51.413346   79367 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:51.413417   79367 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.413435   79367 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.413452   79367 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.413395   79367 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.413473   79367 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 17:37:51.413440   79367 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.413521   79367 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.413529   79367 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.414920   79367 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.414940   79367 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 17:37:51.414983   79367 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.415006   79367 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.415010   79367 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.414982   79367 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.415070   79367 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.415100   79367 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.664642   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.686463   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:50.445457   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:52.945915   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.762809   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:54.259593   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.639969   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.139918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.639403   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.139921   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.640224   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.140272   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.639242   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.139908   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.639233   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:56.139955   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.699627   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 17:37:51.718031   79367 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 17:37:51.718085   79367 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.718133   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.736370   79367 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 17:37:51.736408   79367 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.736454   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.779229   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.800986   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.819343   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.841240   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.853614   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.853650   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.853753   79367 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 17:37:51.853798   79367 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.853842   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.866717   79367 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 17:37:51.866757   79367 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.866807   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.908593   79367 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 17:37:51.908644   79367 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.908701   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.936701   79367 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 17:37:51.936737   79367 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.936784   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.944882   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.944962   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.944983   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.945008   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.945070   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.945089   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.063281   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.080543   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.080556   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.080574   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:52.080629   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:52.080647   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.126573   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.205600   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.205623   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.236617   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 17:37:52.236678   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.236757   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.237083   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 17:37:52.237161   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:52.238804   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 17:37:52.238891   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:52.294945   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 17:37:52.295018   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 17:37:52.295064   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:37:52.295103   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 17:37:52.295127   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295189   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295110   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:52.302365   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 17:37:52.302388   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 17:37:52.302423   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 17:37:52.302472   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:37:52.306933   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 17:37:52.307107   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 17:37:52.309298   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.976780716s)
	I0814 17:37:54.272032   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 17:37:54.272053   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.272063   79367 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.962736886s)
	I0814 17:37:54.272100   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (1.969503874s)
	I0814 17:37:54.272150   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 17:37:54.272105   79367 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 17:37:54.272192   79367 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.272250   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:56.021236   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.749108117s)
	I0814 17:37:56.021281   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 17:37:56.021288   79367 ssh_runner.go:235] Completed: which crictl: (1.749013682s)
	I0814 17:37:56.021309   79367 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021370   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021386   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:55.445017   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:57.445204   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:59.945329   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.260666   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:58.760907   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.639799   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.140184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.639918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.139310   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.639393   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.140139   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.639614   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.139472   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.139946   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.830150   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.808753337s)
	I0814 17:37:59.830181   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 17:37:59.830205   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:59.830208   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.80880721s)
	I0814 17:37:59.830253   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:59.830255   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:38:02.444320   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:04.444667   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.260951   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:03.759895   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.639422   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.139858   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.639412   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.140047   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.640170   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.139779   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.639728   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.139343   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.640249   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.139448   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.796675   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.966400982s)
	I0814 17:38:01.796690   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.966414051s)
	I0814 17:38:01.796708   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 17:38:01.796735   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.796757   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:38:01.796796   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.841898   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 17:38:01.841994   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.571965   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.775142217s)
	I0814 17:38:03.571991   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.729967853s)
	I0814 17:38:03.572002   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 17:38:03.572019   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 17:38:03.572028   79367 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.572079   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:04.422670   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 17:38:04.422705   79367 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:04.422760   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:06.277419   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.854632861s)
	I0814 17:38:06.277457   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 17:38:06.277488   79367 cache_images.go:123] Successfully loaded all cached images
	I0814 17:38:06.277494   79367 cache_images.go:92] duration metric: took 14.864134758s to LoadCachedImages
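
The block above is the cache-load path: each image tarball under /var/lib/minikube/images is fed to "sudo podman load -i <tarball>" so CRI-O can see it, and the whole batch is timed (the "took 14.864134758s to LoadCachedImages" metric). A minimal local sketch of that loop in Go, running the commands directly with os/exec instead of minikube's SSH runner (that substitution is an assumption made only for illustration):

// Sketch: load a list of cached image tarballs with podman and time the batch,
// mirroring the cache_images/crio lines in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	tarballs := []string{
		"/var/lib/minikube/images/etcd_3.5.15-0",
		"/var/lib/minikube/images/kube-apiserver_v1.31.0",
		"/var/lib/minikube/images/kube-controller-manager_v1.31.0",
		"/var/lib/minikube/images/storage-provisioner_v5",
		"/var/lib/minikube/images/coredns_v1.11.1",
	}
	start := time.Now()
	for _, t := range tarballs {
		fmt.Println("Loading image:", t)
		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
		if err != nil {
			fmt.Printf("podman load failed for %s: %v\n%s\n", t, err, out)
			continue
		}
	}
	fmt.Printf("duration metric: took %s to load cached images\n", time.Since(start))
}
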
	I0814 17:38:06.277504   79367 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.0 crio true true} ...
	I0814 17:38:06.277628   79367 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-545149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:38:06.277690   79367 ssh_runner.go:195] Run: crio config
	I0814 17:38:06.337971   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:06.337990   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:06.337999   79367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:38:06.338019   79367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545149 NodeName:no-preload-545149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:38:06.338148   79367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:38:06.338222   79367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:38:06.348156   79367 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:38:06.348219   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:38:06.356784   79367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0814 17:38:06.372439   79367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:38:06.388610   79367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0814 17:38:06.405084   79367 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I0814 17:38:06.408753   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:38:06.420313   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:38:06.546115   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:38:06.563747   79367 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149 for IP: 192.168.39.162
	I0814 17:38:06.563776   79367 certs.go:194] generating shared ca certs ...
	I0814 17:38:06.563799   79367 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:38:06.563973   79367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:38:06.564035   79367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:38:06.564058   79367 certs.go:256] generating profile certs ...
	I0814 17:38:06.564150   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/client.key
	I0814 17:38:06.564207   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key.d0704694
	I0814 17:38:06.564241   79367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key
	I0814 17:38:06.564349   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:38:06.564377   79367 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:38:06.564386   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:38:06.564411   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:38:06.564437   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:38:06.564459   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:38:06.564497   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:38:06.565269   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:38:06.592622   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:38:06.619148   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:38:06.646169   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:38:06.682399   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:38:06.446354   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.948005   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:05.760991   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.260189   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:10.260816   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:06.639416   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.140176   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.639682   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.140063   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.640014   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.139435   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.639256   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.139949   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.640283   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:11.139394   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.714195   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:38:06.750431   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:38:06.772702   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:38:06.793932   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:38:06.815601   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:38:06.837187   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:38:06.858175   79367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:38:06.876187   79367 ssh_runner.go:195] Run: openssl version
	I0814 17:38:06.881909   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:38:06.892057   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896191   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896251   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.901630   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:38:06.910888   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:38:06.920223   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924480   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924527   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.929591   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:38:06.939072   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:38:06.949970   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954288   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954339   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.959551   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:38:06.969505   79367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:38:06.973905   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:38:06.980211   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:38:06.986779   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:38:06.992220   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:38:06.997446   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:38:07.002681   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
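
The openssl invocations above end with "-checkend 86400", which makes openssl exit non-zero if the certificate will expire within the next 86400 seconds (24 hours); that is how the restart path decides whether the existing control-plane certificates can be reused. A rough in-process Go equivalent of that check, shown only as an illustration (the test itself shells out to openssl as logged):

// Sketch: report whether each PEM certificate given on the command line
// expires within the next 24 hours, mirroring `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range os.Args[1:] {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Println(p, "expires within 24h:", soon)
	}
}
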
	I0814 17:38:07.008037   79367 kubeadm.go:392] StartCluster: {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:38:07.008131   79367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:38:07.008188   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.043144   79367 cri.go:89] found id: ""
	I0814 17:38:07.043214   79367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:38:07.052215   79367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:38:07.052233   79367 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:38:07.052281   79367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:38:07.060618   79367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:38:07.061557   79367 kubeconfig.go:125] found "no-preload-545149" server: "https://192.168.39.162:8443"
	I0814 17:38:07.063554   79367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:38:07.072026   79367 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I0814 17:38:07.072064   79367 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:38:07.072075   79367 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:38:07.072117   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.109349   79367 cri.go:89] found id: ""
	I0814 17:38:07.109412   79367 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:38:07.126888   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:38:07.138272   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:38:07.138293   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:38:07.138367   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:38:07.147160   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:38:07.147220   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:38:07.156664   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:38:07.165122   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:38:07.165167   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:38:07.173478   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.181391   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:38:07.181449   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.189750   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:38:07.198215   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:38:07.198274   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:38:07.207384   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:38:07.216034   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:07.337710   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.227720   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.455979   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.521250   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.654574   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:38:08.654684   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.155639   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.655182   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.696193   79367 api_server.go:72] duration metric: took 1.041620068s to wait for apiserver process to appear ...
	I0814 17:38:09.696223   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:38:09.696241   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:09.696703   79367 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I0814 17:38:10.197180   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.389673   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.389702   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.389717   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.403106   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.403138   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.696486   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.700755   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:12.700784   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.196293   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.200564   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:13.200592   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.697253   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.705430   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:38:13.732816   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:38:13.732843   79367 api_server.go:131] duration metric: took 4.036614106s to wait for apiserver health ...
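
The healthz wait that just completed steps through the typical restart sequence: first "connection refused" while the apiserver container comes up, then 403 because the unauthenticated probe hits /healthz before RBAC allows it, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststart hooks finish, and finally 200. A minimal sketch of such a polling loop, assuming a 4-minute budget and skipping TLS verification purely for illustration (the real client trusts the cluster CA):

// Sketch: poll an apiserver /healthz endpoint every 500ms until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.162:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		} else {
			fmt.Println("healthz returned", resp.StatusCode) // 403/500 until bootstrap hooks finish
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
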
	I0814 17:38:13.732852   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:13.732860   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:13.734904   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:38:11.444846   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:13.943583   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:12.759294   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:14.760919   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:11.640107   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.140034   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.639463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.139428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.639575   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.140005   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.639473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.140124   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.639459   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:16.139187   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.736533   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:38:13.756650   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:38:13.776947   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:38:13.803170   79367 system_pods.go:59] 8 kube-system pods found
	I0814 17:38:13.803214   79367 system_pods.go:61] "coredns-6f6b679f8f-tt46z" [169beaf0-0310-47d5-b212-9a81c6b3df68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:38:13.803228   79367 system_pods.go:61] "etcd-no-preload-545149" [47e22bb4-bedb-433f-ae2e-f281269c6e87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:38:13.803240   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [37854a66-b05b-49fe-834b-98f724087ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:38:13.803249   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [69189ec1-6f8c-4613-bf47-46e101a14ecd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:38:13.803307   79367 system_pods.go:61] "kube-proxy-gfrqp" [2206243d-f6e0-462c-969c-60e192038700] Running
	I0814 17:38:13.803314   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [0bbd41bd-0a18-486b-b78c-9a0e9efe209a] Running
	I0814 17:38:13.803322   79367 system_pods.go:61] "metrics-server-6867b74b74-8c2cx" [b30f3018-f316-4997-a8fa-ff6c83aa7dd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:38:13.803341   79367 system_pods.go:61] "storage-provisioner" [635027cc-ac5d-4474-a243-ef48b3580998] Running
	I0814 17:38:13.803349   79367 system_pods.go:74] duration metric: took 26.377795ms to wait for pod list to return data ...
	I0814 17:38:13.803357   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:38:13.814093   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:38:13.814120   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:38:13.814131   79367 node_conditions.go:105] duration metric: took 10.768606ms to run NodePressure ...
	I0814 17:38:13.814147   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:14.196481   79367 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202205   79367 kubeadm.go:739] kubelet initialised
	I0814 17:38:14.202239   79367 kubeadm.go:740] duration metric: took 5.723699ms waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202250   79367 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:38:14.209431   79367 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.215568   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215597   79367 pod_ready.go:81] duration metric: took 6.13175ms for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.215610   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215620   79367 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.227611   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227647   79367 pod_ready.go:81] duration metric: took 12.016107ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.227661   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227669   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.235095   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235130   79367 pod_ready.go:81] duration metric: took 7.452418ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.235143   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235153   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.244417   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244447   79367 pod_ready.go:81] duration metric: took 9.283911ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.244459   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244466   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999946   79367 pod_ready.go:92] pod "kube-proxy-gfrqp" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:14.999968   79367 pod_ready.go:81] duration metric: took 755.491905ms for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999977   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
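
The pod_ready lines poll each system-critical pod until its Ready condition turns True or the 4m0s budget runs out; a node that is itself not Ready short-circuits the wait with the "(skipping!)" errors shown above. A minimal client-go sketch of that per-pod wait, assuming client-go is available in the module and using a placeholder kubeconfig path (both assumptions are for illustration only):

// Sketch: poll one kube-system pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-545149", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
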
	I0814 17:38:15.945421   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:18.444758   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.761265   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.639219   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.139463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.639839   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.140251   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.639890   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.139999   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.639652   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.139316   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.639809   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:21.139471   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.005796   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.006769   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:20.506792   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:20.506815   79367 pod_ready.go:81] duration metric: took 5.50683258s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.506825   79367 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.445449   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:22.446622   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:24.943859   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.760870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:23.761708   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.139292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.640151   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.139450   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.639996   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.139447   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.639267   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.139595   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.639451   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:26.140190   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.513577   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:25.012936   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.945216   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.444769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.260276   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:28.263789   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.640120   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.140141   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.640184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.139896   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.140246   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.639895   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.139860   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.639358   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:31.140029   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.014354   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.516049   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.944967   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.444885   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:30.760174   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:33.259870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:35.260137   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.639317   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.140039   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.139240   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.640181   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.139789   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.639297   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.139871   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.639347   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:36.140044   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.013464   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.513741   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.944347   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:38.945374   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:37.759866   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:39.760334   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.640132   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.139254   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.639457   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.139928   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.639196   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:39.139906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:39.139980   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:39.179494   80228 cri.go:89] found id: ""
	I0814 17:38:39.179524   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.179535   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:39.179543   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:39.179619   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:39.210704   80228 cri.go:89] found id: ""
	I0814 17:38:39.210732   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.210741   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:39.210746   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:39.210796   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:39.247562   80228 cri.go:89] found id: ""
	I0814 17:38:39.247590   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.247597   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:39.247603   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:39.247665   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:39.281456   80228 cri.go:89] found id: ""
	I0814 17:38:39.281480   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.281488   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:39.281494   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:39.281553   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:39.318588   80228 cri.go:89] found id: ""
	I0814 17:38:39.318620   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.318630   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:39.318638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:39.318695   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:39.350270   80228 cri.go:89] found id: ""
	I0814 17:38:39.350294   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.350303   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:39.350311   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:39.350387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:39.382168   80228 cri.go:89] found id: ""
	I0814 17:38:39.382198   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.382209   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:39.382215   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:39.382325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:39.415307   80228 cri.go:89] found id: ""
	I0814 17:38:39.415342   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.415354   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:39.415375   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:39.415388   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:39.469591   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:39.469632   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:39.482909   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:39.482942   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:39.609874   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:39.609906   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:39.609921   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:39.683210   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:39.683253   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:39.013876   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.513527   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.444286   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:43.444539   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.260548   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:44.263171   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.222687   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:42.235017   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:42.235088   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:42.285518   80228 cri.go:89] found id: ""
	I0814 17:38:42.285544   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.285553   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:42.285559   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:42.285614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:42.320462   80228 cri.go:89] found id: ""
	I0814 17:38:42.320492   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.320500   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:42.320506   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:42.320594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:42.353484   80228 cri.go:89] found id: ""
	I0814 17:38:42.353515   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.353528   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:42.353537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:42.353614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:42.388122   80228 cri.go:89] found id: ""
	I0814 17:38:42.388152   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.388163   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:42.388171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:42.388239   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:42.420246   80228 cri.go:89] found id: ""
	I0814 17:38:42.420275   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.420285   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:42.420293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:42.420359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:42.454636   80228 cri.go:89] found id: ""
	I0814 17:38:42.454669   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.454680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:42.454687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:42.454749   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:42.494638   80228 cri.go:89] found id: ""
	I0814 17:38:42.494670   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.494679   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:42.494686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:42.494751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:42.532224   80228 cri.go:89] found id: ""
	I0814 17:38:42.532257   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.532269   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:42.532281   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:42.532296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:42.546100   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:42.546132   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:42.616561   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:42.616589   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:42.616604   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:42.697269   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:42.697305   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:42.737787   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:42.737821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.293788   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:45.309020   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:45.309080   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:45.349218   80228 cri.go:89] found id: ""
	I0814 17:38:45.349246   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.349254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:45.349260   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:45.349318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:45.387622   80228 cri.go:89] found id: ""
	I0814 17:38:45.387653   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.387664   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:45.387672   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:45.387750   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:45.422120   80228 cri.go:89] found id: ""
	I0814 17:38:45.422154   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.422164   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:45.422169   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:45.422226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:45.457309   80228 cri.go:89] found id: ""
	I0814 17:38:45.457337   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.457352   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:45.457361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:45.457412   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:45.488969   80228 cri.go:89] found id: ""
	I0814 17:38:45.489000   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.489011   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:45.489019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:45.489081   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:45.522230   80228 cri.go:89] found id: ""
	I0814 17:38:45.522258   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.522273   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:45.522280   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:45.522345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:45.555394   80228 cri.go:89] found id: ""
	I0814 17:38:45.555425   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.555440   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:45.555448   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:45.555520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:45.587870   80228 cri.go:89] found id: ""
	I0814 17:38:45.587899   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.587910   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:45.587934   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:45.587951   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.638662   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:45.638709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:45.652217   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:45.652248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:45.733611   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:45.733635   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:45.733648   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:45.822733   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:45.822773   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:44.013405   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.014164   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:45.445289   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:47.944672   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.760279   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:49.260108   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:48.361519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:48.374848   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:48.374916   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:48.410849   80228 cri.go:89] found id: ""
	I0814 17:38:48.410897   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.410911   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:48.410920   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:48.410986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:48.448507   80228 cri.go:89] found id: ""
	I0814 17:38:48.448530   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.448537   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:48.448543   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:48.448594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:48.486257   80228 cri.go:89] found id: ""
	I0814 17:38:48.486298   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.486306   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:48.486312   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:48.486363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:48.520447   80228 cri.go:89] found id: ""
	I0814 17:38:48.520473   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.520482   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:48.520487   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:48.520544   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:48.552659   80228 cri.go:89] found id: ""
	I0814 17:38:48.552690   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.552698   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:48.552704   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:48.552768   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:48.585302   80228 cri.go:89] found id: ""
	I0814 17:38:48.585331   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.585341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:48.585348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:48.585415   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:48.617388   80228 cri.go:89] found id: ""
	I0814 17:38:48.617417   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.617428   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:48.617435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:48.617503   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:48.658987   80228 cri.go:89] found id: ""
	I0814 17:38:48.659012   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.659019   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:48.659027   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:48.659041   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:48.719882   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:48.719915   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:48.738962   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:48.738994   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:48.807703   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:48.807727   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:48.807739   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:48.886555   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:48.886585   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:48.514199   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.013627   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:50.444135   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:52.444957   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.446434   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.760518   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.260283   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.423653   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:51.436700   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:51.436792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:51.473198   80228 cri.go:89] found id: ""
	I0814 17:38:51.473227   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.473256   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:51.473262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:51.473311   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:51.508631   80228 cri.go:89] found id: ""
	I0814 17:38:51.508664   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.508675   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:51.508682   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:51.508743   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:51.540917   80228 cri.go:89] found id: ""
	I0814 17:38:51.540950   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.540958   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:51.540965   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:51.541014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:51.578112   80228 cri.go:89] found id: ""
	I0814 17:38:51.578140   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.578150   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:51.578158   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:51.578220   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:51.612664   80228 cri.go:89] found id: ""
	I0814 17:38:51.612692   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.612700   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:51.612706   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:51.612756   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:51.646374   80228 cri.go:89] found id: ""
	I0814 17:38:51.646399   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.646407   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:51.646413   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:51.646463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:51.682052   80228 cri.go:89] found id: ""
	I0814 17:38:51.682081   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.682092   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:51.682098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:51.682147   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:51.722625   80228 cri.go:89] found id: ""
	I0814 17:38:51.722653   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.722663   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:51.722674   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:51.722687   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:51.771788   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:51.771818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:51.785403   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:51.785432   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:51.854249   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:51.854269   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:51.854281   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:51.938121   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:51.938155   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.475672   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:54.491309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:54.491399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:54.524971   80228 cri.go:89] found id: ""
	I0814 17:38:54.525001   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.525011   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:54.525023   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:54.525087   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:54.560631   80228 cri.go:89] found id: ""
	I0814 17:38:54.560661   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.560670   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:54.560675   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:54.560728   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:54.595710   80228 cri.go:89] found id: ""
	I0814 17:38:54.595740   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.595751   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:54.595759   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:54.595824   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:54.631449   80228 cri.go:89] found id: ""
	I0814 17:38:54.631476   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.631487   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:54.631495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:54.631557   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:54.666492   80228 cri.go:89] found id: ""
	I0814 17:38:54.666526   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.666539   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:54.666548   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:54.666617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:54.701111   80228 cri.go:89] found id: ""
	I0814 17:38:54.701146   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.701158   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:54.701166   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:54.701226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:54.737535   80228 cri.go:89] found id: ""
	I0814 17:38:54.737574   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.737585   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:54.737595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:54.737653   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:54.771658   80228 cri.go:89] found id: ""
	I0814 17:38:54.771679   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.771686   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:54.771694   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:54.771709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:54.841798   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:54.841817   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:54.841829   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:54.930861   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:54.930917   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.970508   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:54.970540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:55.023077   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:55.023123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:53.513137   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.014005   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.945376   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:59.445437   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.260436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:58.759613   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:57.538876   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:57.551796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:57.551868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:57.584576   80228 cri.go:89] found id: ""
	I0814 17:38:57.584601   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.584609   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:57.584617   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:57.584687   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:57.617209   80228 cri.go:89] found id: ""
	I0814 17:38:57.617239   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.617249   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:57.617257   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:57.617338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:57.650062   80228 cri.go:89] found id: ""
	I0814 17:38:57.650089   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.650096   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:57.650102   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:57.650160   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:57.681118   80228 cri.go:89] found id: ""
	I0814 17:38:57.681146   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.681154   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:57.681160   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:57.681228   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:57.713803   80228 cri.go:89] found id: ""
	I0814 17:38:57.713834   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.713842   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:57.713851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:57.713920   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:57.749555   80228 cri.go:89] found id: ""
	I0814 17:38:57.749594   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.749604   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:57.749613   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:57.749677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:57.782714   80228 cri.go:89] found id: ""
	I0814 17:38:57.782744   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.782755   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:57.782763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:57.782826   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:57.815386   80228 cri.go:89] found id: ""
	I0814 17:38:57.815414   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.815423   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:57.815436   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:57.815450   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:57.868153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:57.868183   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:57.881022   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:57.881053   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:57.950474   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:57.950501   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:57.950515   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:58.032611   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:58.032644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:00.569493   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:00.583257   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:00.583384   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:00.614680   80228 cri.go:89] found id: ""
	I0814 17:39:00.614712   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.614723   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:00.614732   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:00.614792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:00.648161   80228 cri.go:89] found id: ""
	I0814 17:39:00.648189   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.648196   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:00.648203   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:00.648256   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:00.681844   80228 cri.go:89] found id: ""
	I0814 17:39:00.681872   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.681883   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:00.681890   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:00.681952   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:00.714773   80228 cri.go:89] found id: ""
	I0814 17:39:00.714804   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.714815   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:00.714823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:00.714891   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:00.747748   80228 cri.go:89] found id: ""
	I0814 17:39:00.747774   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.747781   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:00.747787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:00.747845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:00.783135   80228 cri.go:89] found id: ""
	I0814 17:39:00.783168   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.783179   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:00.783186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:00.783242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:00.817505   80228 cri.go:89] found id: ""
	I0814 17:39:00.817541   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.817552   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:00.817567   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:00.817633   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:00.849205   80228 cri.go:89] found id: ""
	I0814 17:39:00.849231   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.849241   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:00.849252   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:00.849273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:00.902529   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:00.902567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:00.916313   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:00.916346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:00.988708   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:00.988725   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:00.988737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:01.063818   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:01.063853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:58.512313   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.513694   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:01.944987   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.945640   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.759979   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.259928   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.603241   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:03.616400   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:03.616504   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:03.649580   80228 cri.go:89] found id: ""
	I0814 17:39:03.649619   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.649637   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:03.649650   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:03.649718   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:03.686252   80228 cri.go:89] found id: ""
	I0814 17:39:03.686274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.686282   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:03.686289   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:03.686349   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:03.720995   80228 cri.go:89] found id: ""
	I0814 17:39:03.721024   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.721036   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:03.721043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:03.721094   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:03.753466   80228 cri.go:89] found id: ""
	I0814 17:39:03.753491   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.753500   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:03.753506   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:03.753554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:03.794427   80228 cri.go:89] found id: ""
	I0814 17:39:03.794450   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.794458   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:03.794464   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:03.794524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:03.826245   80228 cri.go:89] found id: ""
	I0814 17:39:03.826274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.826282   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:03.826288   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:03.826355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:03.857208   80228 cri.go:89] found id: ""
	I0814 17:39:03.857238   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.857247   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:03.857253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:03.857325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:03.892840   80228 cri.go:89] found id: ""
	I0814 17:39:03.892864   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.892871   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:03.892879   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:03.892891   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:03.948554   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:03.948579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:03.962222   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:03.962249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:04.031833   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:04.031859   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:04.031875   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:04.109572   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:04.109636   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:03.013542   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.513201   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.444222   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:08.444833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.759653   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:07.760063   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.259961   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.646923   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:06.659699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:06.659757   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:06.691918   80228 cri.go:89] found id: ""
	I0814 17:39:06.691941   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.691951   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:06.691958   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:06.692016   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:06.722675   80228 cri.go:89] found id: ""
	I0814 17:39:06.722703   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.722713   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:06.722720   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:06.722782   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:06.757215   80228 cri.go:89] found id: ""
	I0814 17:39:06.757248   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.757259   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:06.757266   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:06.757333   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:06.791337   80228 cri.go:89] found id: ""
	I0814 17:39:06.791370   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.791378   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:06.791384   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:06.791440   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:06.825182   80228 cri.go:89] found id: ""
	I0814 17:39:06.825209   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.825220   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:06.825234   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:06.825288   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:06.857473   80228 cri.go:89] found id: ""
	I0814 17:39:06.857498   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.857507   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:06.857514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:06.857582   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:06.891293   80228 cri.go:89] found id: ""
	I0814 17:39:06.891343   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.891355   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:06.891363   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:06.891421   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:06.927476   80228 cri.go:89] found id: ""
	I0814 17:39:06.927505   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.927516   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:06.927527   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:06.927541   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:06.980604   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:06.980635   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:06.994648   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:06.994678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:07.072554   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:07.072580   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:07.072599   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:07.153141   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:07.153182   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:09.693348   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:09.705754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:09.705814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:09.739674   80228 cri.go:89] found id: ""
	I0814 17:39:09.739706   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.739717   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:09.739724   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:09.739788   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:09.774381   80228 cri.go:89] found id: ""
	I0814 17:39:09.774405   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.774413   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:09.774420   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:09.774478   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:09.806586   80228 cri.go:89] found id: ""
	I0814 17:39:09.806614   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.806623   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:09.806629   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:09.806696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:09.839564   80228 cri.go:89] found id: ""
	I0814 17:39:09.839594   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.839602   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:09.839614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:09.839672   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:09.872338   80228 cri.go:89] found id: ""
	I0814 17:39:09.872373   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.872385   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:09.872393   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:09.872457   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:09.904184   80228 cri.go:89] found id: ""
	I0814 17:39:09.904223   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.904231   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:09.904253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:09.904312   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:09.937217   80228 cri.go:89] found id: ""
	I0814 17:39:09.937242   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.937251   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:09.937259   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:09.937322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:09.972273   80228 cri.go:89] found id: ""
	I0814 17:39:09.972301   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.972313   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:09.972325   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:09.972341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:10.023736   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:10.023764   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:10.036654   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:10.036681   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:10.104031   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:10.104052   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:10.104068   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:10.187816   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:10.187853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:08.013632   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.513090   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.944491   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.945542   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.260129   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:14.759867   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.727237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:12.741970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:12.742041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:12.778721   80228 cri.go:89] found id: ""
	I0814 17:39:12.778748   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.778758   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:12.778765   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:12.778820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:12.812575   80228 cri.go:89] found id: ""
	I0814 17:39:12.812603   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.812610   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:12.812619   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:12.812678   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:12.845697   80228 cri.go:89] found id: ""
	I0814 17:39:12.845726   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.845737   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:12.845744   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:12.845809   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:12.879491   80228 cri.go:89] found id: ""
	I0814 17:39:12.879518   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.879529   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:12.879536   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:12.879604   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:12.912321   80228 cri.go:89] found id: ""
	I0814 17:39:12.912348   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.912356   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:12.912361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:12.912410   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:12.948866   80228 cri.go:89] found id: ""
	I0814 17:39:12.948889   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.948897   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:12.948903   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:12.948963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:12.983394   80228 cri.go:89] found id: ""
	I0814 17:39:12.983444   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.983459   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:12.983466   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:12.983530   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:13.018406   80228 cri.go:89] found id: ""
	I0814 17:39:13.018427   80228 logs.go:276] 0 containers: []
	W0814 17:39:13.018434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:13.018442   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:13.018457   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.069615   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:13.069655   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:13.082618   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:13.082651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:13.145033   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:13.145054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:13.145067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:13.225074   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:13.225108   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:15.765512   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:15.778320   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:15.778380   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:15.812847   80228 cri.go:89] found id: ""
	I0814 17:39:15.812876   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.812885   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:15.812896   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:15.812944   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:15.845131   80228 cri.go:89] found id: ""
	I0814 17:39:15.845159   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.845169   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:15.845176   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:15.845242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:15.879763   80228 cri.go:89] found id: ""
	I0814 17:39:15.879789   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.879799   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:15.879807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:15.879864   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:15.912746   80228 cri.go:89] found id: ""
	I0814 17:39:15.912776   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.912784   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:15.912797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:15.912858   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:15.946433   80228 cri.go:89] found id: ""
	I0814 17:39:15.946456   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.946465   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:15.946473   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:15.946534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:15.980060   80228 cri.go:89] found id: ""
	I0814 17:39:15.980086   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.980096   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:15.980103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:15.980167   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:16.011539   80228 cri.go:89] found id: ""
	I0814 17:39:16.011570   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.011581   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:16.011590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:16.011660   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:16.046019   80228 cri.go:89] found id: ""
	I0814 17:39:16.046046   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.046057   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:16.046068   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:16.046083   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:16.058442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:16.058470   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:16.132775   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:16.132799   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:16.132811   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:16.218360   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:16.218398   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:16.258070   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:16.258096   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.013275   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.013967   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.444280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:17.444827   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.943845   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:16.760773   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.259891   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:18.813127   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:18.826187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:18.826267   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:18.858405   80228 cri.go:89] found id: ""
	I0814 17:39:18.858433   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.858444   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:18.858452   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:18.858524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:18.893302   80228 cri.go:89] found id: ""
	I0814 17:39:18.893335   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.893342   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:18.893350   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:18.893417   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:18.929885   80228 cri.go:89] found id: ""
	I0814 17:39:18.929919   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.929929   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:18.929937   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:18.930000   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:18.966758   80228 cri.go:89] found id: ""
	I0814 17:39:18.966783   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.966792   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:18.966799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:18.966861   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:18.999815   80228 cri.go:89] found id: ""
	I0814 17:39:18.999838   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.999845   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:18.999851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:18.999915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:19.033737   80228 cri.go:89] found id: ""
	I0814 17:39:19.033761   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.033768   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:19.033774   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:19.033830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:19.070080   80228 cri.go:89] found id: ""
	I0814 17:39:19.070105   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.070113   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:19.070119   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:19.070190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:19.102868   80228 cri.go:89] found id: ""
	I0814 17:39:19.102897   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.102907   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:19.102918   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:19.102932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:19.156525   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:19.156569   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:19.170193   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:19.170225   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:19.236521   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:19.236547   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:19.236561   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:19.315984   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:19.316024   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:17.512553   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.513046   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.513082   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:22.444948   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:24.945111   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.260362   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:23.260567   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.855554   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:21.868457   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:21.868527   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:21.902098   80228 cri.go:89] found id: ""
	I0814 17:39:21.902124   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.902132   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:21.902139   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:21.902200   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:21.934876   80228 cri.go:89] found id: ""
	I0814 17:39:21.934908   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.934919   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:21.934926   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:21.934987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:21.976507   80228 cri.go:89] found id: ""
	I0814 17:39:21.976536   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.976548   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:21.976555   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:21.976617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:22.013876   80228 cri.go:89] found id: ""
	I0814 17:39:22.013897   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.013904   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:22.013909   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:22.013955   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:22.051943   80228 cri.go:89] found id: ""
	I0814 17:39:22.051969   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.051979   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:22.051999   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:22.052064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:22.084702   80228 cri.go:89] found id: ""
	I0814 17:39:22.084725   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.084733   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:22.084738   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:22.084784   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:22.117397   80228 cri.go:89] found id: ""
	I0814 17:39:22.117424   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.117432   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:22.117439   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:22.117490   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:22.154139   80228 cri.go:89] found id: ""
	I0814 17:39:22.154168   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.154178   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:22.154189   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:22.154201   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:22.205550   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:22.205580   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:22.219644   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:22.219679   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:22.288934   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:22.288957   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:22.288969   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:22.372917   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:22.372954   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:24.912578   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:24.925365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:24.925430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:24.961207   80228 cri.go:89] found id: ""
	I0814 17:39:24.961234   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.961248   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:24.961257   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:24.961339   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:24.998878   80228 cri.go:89] found id: ""
	I0814 17:39:24.998904   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.998911   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:24.998918   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:24.998971   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:25.034141   80228 cri.go:89] found id: ""
	I0814 17:39:25.034174   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.034187   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:25.034196   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:25.034274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:25.075634   80228 cri.go:89] found id: ""
	I0814 17:39:25.075667   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.075679   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:25.075688   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:25.075759   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:25.112890   80228 cri.go:89] found id: ""
	I0814 17:39:25.112929   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.112939   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:25.112946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:25.113007   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:25.152887   80228 cri.go:89] found id: ""
	I0814 17:39:25.152913   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.152921   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:25.152927   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:25.152987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:25.186421   80228 cri.go:89] found id: ""
	I0814 17:39:25.186452   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.186463   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:25.186471   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:25.186537   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:25.220390   80228 cri.go:89] found id: ""
	I0814 17:39:25.220417   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.220425   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:25.220432   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:25.220446   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:25.296112   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:25.296146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:25.335421   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:25.335449   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:25.387690   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:25.387718   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:25.401926   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:25.401957   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:25.471111   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:24.012534   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:26.513529   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.445280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:29.445416   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:25.759098   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.759924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:30.259610   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.972237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:27.985512   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:27.985575   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:28.019454   80228 cri.go:89] found id: ""
	I0814 17:39:28.019482   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.019493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:28.019502   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:28.019566   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:28.056908   80228 cri.go:89] found id: ""
	I0814 17:39:28.056931   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.056939   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:28.056944   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:28.056998   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:28.090678   80228 cri.go:89] found id: ""
	I0814 17:39:28.090707   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.090715   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:28.090721   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:28.090785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:28.125557   80228 cri.go:89] found id: ""
	I0814 17:39:28.125591   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.125609   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:28.125620   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:28.125682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:28.158092   80228 cri.go:89] found id: ""
	I0814 17:39:28.158121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.158129   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:28.158135   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:28.158191   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:28.193403   80228 cri.go:89] found id: ""
	I0814 17:39:28.193434   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.193445   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:28.193454   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:28.193524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:28.231095   80228 cri.go:89] found id: ""
	I0814 17:39:28.231121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.231131   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:28.231139   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:28.231203   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:28.280157   80228 cri.go:89] found id: ""
	I0814 17:39:28.280185   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.280196   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:28.280207   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:28.280220   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:28.352877   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:28.352894   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:28.352906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:28.439692   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:28.439736   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:28.479986   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:28.480012   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:28.538454   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:28.538493   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.052941   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:31.065810   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:31.065879   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:31.097988   80228 cri.go:89] found id: ""
	I0814 17:39:31.098013   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.098020   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:31.098045   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:31.098102   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:31.130126   80228 cri.go:89] found id: ""
	I0814 17:39:31.130152   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.130160   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:31.130166   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:31.130225   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:31.165945   80228 cri.go:89] found id: ""
	I0814 17:39:31.165984   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.165995   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:31.166003   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:31.166064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:31.199749   80228 cri.go:89] found id: ""
	I0814 17:39:31.199772   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.199778   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:31.199784   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:31.199843   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:31.231398   80228 cri.go:89] found id: ""
	I0814 17:39:31.231425   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.231436   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:31.231444   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:31.231528   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:31.263842   80228 cri.go:89] found id: ""
	I0814 17:39:31.263868   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.263878   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:31.263885   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:31.263950   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:31.299258   80228 cri.go:89] found id: ""
	I0814 17:39:31.299289   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.299301   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:31.299309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:31.299399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:29.013468   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.013638   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.445769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:33.944939   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:32.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:34.262303   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.332626   80228 cri.go:89] found id: ""
	I0814 17:39:31.332649   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.332657   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:31.332666   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:31.332678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:31.369262   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:31.369288   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:31.426003   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:31.426034   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.439583   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:31.439611   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:31.507538   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:31.507563   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:31.507583   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.085342   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:34.097491   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:34.097567   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:34.129220   80228 cri.go:89] found id: ""
	I0814 17:39:34.129244   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.129254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:34.129262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:34.129322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:34.161233   80228 cri.go:89] found id: ""
	I0814 17:39:34.161256   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.161264   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:34.161270   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:34.161334   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:34.193649   80228 cri.go:89] found id: ""
	I0814 17:39:34.193675   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.193683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:34.193689   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:34.193754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:34.226722   80228 cri.go:89] found id: ""
	I0814 17:39:34.226753   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.226763   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:34.226772   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:34.226842   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:34.259735   80228 cri.go:89] found id: ""
	I0814 17:39:34.259761   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.259774   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:34.259787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:34.259850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:34.296804   80228 cri.go:89] found id: ""
	I0814 17:39:34.296830   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.296838   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:34.296844   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:34.296894   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:34.328942   80228 cri.go:89] found id: ""
	I0814 17:39:34.328973   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.328982   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:34.328988   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:34.329041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:34.360820   80228 cri.go:89] found id: ""
	I0814 17:39:34.360847   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.360858   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:34.360868   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:34.360882   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:34.411106   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:34.411142   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:34.424737   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:34.424769   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:34.489094   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:34.489122   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:34.489138   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.569783   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:34.569818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:33.015308   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.513073   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.945264   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:38.444913   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:36.760740   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:39.260499   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:37.107492   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:37.120829   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:37.120901   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:37.154556   80228 cri.go:89] found id: ""
	I0814 17:39:37.154589   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.154601   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:37.154609   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:37.154673   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:37.192570   80228 cri.go:89] found id: ""
	I0814 17:39:37.192602   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.192609   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:37.192615   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:37.192679   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:37.225845   80228 cri.go:89] found id: ""
	I0814 17:39:37.225891   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.225902   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:37.225917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:37.225986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:37.262370   80228 cri.go:89] found id: ""
	I0814 17:39:37.262399   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.262408   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:37.262416   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:37.262481   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:37.297642   80228 cri.go:89] found id: ""
	I0814 17:39:37.297669   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.297680   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:37.297687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:37.297754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:37.331006   80228 cri.go:89] found id: ""
	I0814 17:39:37.331032   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.331041   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:37.331046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:37.331111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:37.364753   80228 cri.go:89] found id: ""
	I0814 17:39:37.364777   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.364786   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:37.364792   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:37.364850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:37.397722   80228 cri.go:89] found id: ""
	I0814 17:39:37.397748   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.397760   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:37.397770   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:37.397785   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:37.473616   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:37.473643   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:37.473659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:37.557672   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:37.557710   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.596337   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:37.596368   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:37.646815   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:37.646853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.160391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:40.174099   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:40.174181   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:40.208783   80228 cri.go:89] found id: ""
	I0814 17:39:40.208814   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.208821   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:40.208829   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:40.208880   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:40.243555   80228 cri.go:89] found id: ""
	I0814 17:39:40.243580   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.243588   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:40.243594   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:40.243661   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:40.276685   80228 cri.go:89] found id: ""
	I0814 17:39:40.276711   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.276723   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:40.276731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:40.276795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:40.309893   80228 cri.go:89] found id: ""
	I0814 17:39:40.309925   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.309937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:40.309944   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:40.310073   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:40.341724   80228 cri.go:89] found id: ""
	I0814 17:39:40.341751   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.341762   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:40.341770   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:40.341834   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:40.376442   80228 cri.go:89] found id: ""
	I0814 17:39:40.376478   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.376487   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:40.376495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:40.376558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:40.419240   80228 cri.go:89] found id: ""
	I0814 17:39:40.419269   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.419277   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:40.419284   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:40.419374   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:40.464678   80228 cri.go:89] found id: ""
	I0814 17:39:40.464703   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.464712   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:40.464721   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:40.464737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:40.531138   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:40.531175   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.546809   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:40.546842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:40.618791   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:40.618809   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:40.618821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:40.706169   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:40.706219   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.513604   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.013349   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.445989   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:42.944417   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:41.261429   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.760436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.250987   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:43.266109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:43.266179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:43.301860   80228 cri.go:89] found id: ""
	I0814 17:39:43.301891   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.301899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:43.301908   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:43.301991   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:43.337166   80228 cri.go:89] found id: ""
	I0814 17:39:43.337195   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.337205   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:43.337212   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:43.337262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:43.370640   80228 cri.go:89] found id: ""
	I0814 17:39:43.370671   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.370683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:43.370696   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:43.370752   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:43.405598   80228 cri.go:89] found id: ""
	I0814 17:39:43.405624   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.405632   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:43.405638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:43.405705   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:43.437161   80228 cri.go:89] found id: ""
	I0814 17:39:43.437184   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.437192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:43.437198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:43.437295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:43.470675   80228 cri.go:89] found id: ""
	I0814 17:39:43.470707   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.470718   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:43.470726   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:43.470787   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:43.503036   80228 cri.go:89] found id: ""
	I0814 17:39:43.503062   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.503073   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:43.503081   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:43.503149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:43.538269   80228 cri.go:89] found id: ""
	I0814 17:39:43.538296   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.538304   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:43.538328   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:43.538340   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:43.621889   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:43.621936   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:43.667460   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:43.667491   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:43.723630   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:43.723663   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:43.738905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:43.738939   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:43.805484   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:46.306031   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:42.512438   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:44.513112   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.513203   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:45.445470   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:47.944790   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.260236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:48.260662   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.324624   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:46.324696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:46.360039   80228 cri.go:89] found id: ""
	I0814 17:39:46.360066   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.360074   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:46.360082   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:46.360131   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:46.413735   80228 cri.go:89] found id: ""
	I0814 17:39:46.413767   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.413779   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:46.413788   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:46.413876   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:46.458823   80228 cri.go:89] found id: ""
	I0814 17:39:46.458851   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.458861   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:46.458869   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:46.458928   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:46.495347   80228 cri.go:89] found id: ""
	I0814 17:39:46.495378   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.495387   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:46.495392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:46.495441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:46.531502   80228 cri.go:89] found id: ""
	I0814 17:39:46.531533   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.531545   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:46.531554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:46.531624   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:46.564450   80228 cri.go:89] found id: ""
	I0814 17:39:46.564473   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.564482   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:46.564488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:46.564535   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:46.598293   80228 cri.go:89] found id: ""
	I0814 17:39:46.598401   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.598421   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:46.598431   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:46.598498   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:46.632370   80228 cri.go:89] found id: ""
	I0814 17:39:46.632400   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.632411   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:46.632423   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:46.632438   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:46.711814   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:46.711848   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:46.749410   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:46.749443   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:46.801686   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:46.801720   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:46.815196   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:46.815218   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:46.885648   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.386223   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:49.399359   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:49.399430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:49.432133   80228 cri.go:89] found id: ""
	I0814 17:39:49.432168   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.432179   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:49.432186   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:49.432250   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:49.469760   80228 cri.go:89] found id: ""
	I0814 17:39:49.469790   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.469799   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:49.469811   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:49.469873   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:49.500437   80228 cri.go:89] found id: ""
	I0814 17:39:49.500466   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.500474   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:49.500481   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:49.500531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:49.533685   80228 cri.go:89] found id: ""
	I0814 17:39:49.533709   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.533717   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:49.533723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:49.533790   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:49.570551   80228 cri.go:89] found id: ""
	I0814 17:39:49.570577   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.570584   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:49.570590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:49.570654   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:49.606649   80228 cri.go:89] found id: ""
	I0814 17:39:49.606672   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.606680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:49.606686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:49.606734   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:49.638060   80228 cri.go:89] found id: ""
	I0814 17:39:49.638090   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.638101   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:49.638109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:49.638178   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:49.674503   80228 cri.go:89] found id: ""
	I0814 17:39:49.674526   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.674534   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:49.674543   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:49.674563   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:49.710185   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:49.710213   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:49.764112   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:49.764146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:49.777862   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:49.777888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:49.849786   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.849806   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:49.849819   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:48.513418   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:51.013242   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.444526   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.444788   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.944646   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.759890   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.760236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.760324   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.429811   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:52.444364   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:52.444441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:52.483047   80228 cri.go:89] found id: ""
	I0814 17:39:52.483074   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.483085   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:52.483093   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:52.483157   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:52.520236   80228 cri.go:89] found id: ""
	I0814 17:39:52.520264   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.520274   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:52.520287   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:52.520353   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:52.553757   80228 cri.go:89] found id: ""
	I0814 17:39:52.553784   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.553795   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:52.553802   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:52.553869   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:52.588782   80228 cri.go:89] found id: ""
	I0814 17:39:52.588808   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.588818   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:52.588827   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:52.588893   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:52.620144   80228 cri.go:89] found id: ""
	I0814 17:39:52.620180   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.620192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:52.620201   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:52.620274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:52.652712   80228 cri.go:89] found id: ""
	I0814 17:39:52.652743   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.652755   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:52.652763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:52.652825   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:52.687789   80228 cri.go:89] found id: ""
	I0814 17:39:52.687819   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.687831   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:52.687838   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:52.687892   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:52.718996   80228 cri.go:89] found id: ""
	I0814 17:39:52.719021   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.719031   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:52.719041   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:52.719055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:52.775775   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:52.775808   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:52.789024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:52.789055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:52.863320   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:52.863351   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:52.863366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:52.941533   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:52.941571   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:55.477833   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:55.490723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:55.490783   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:55.525816   80228 cri.go:89] found id: ""
	I0814 17:39:55.525844   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.525852   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:55.525859   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:55.525908   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:55.561855   80228 cri.go:89] found id: ""
	I0814 17:39:55.561878   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.561887   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:55.561892   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:55.561949   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:55.599997   80228 cri.go:89] found id: ""
	I0814 17:39:55.600027   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.600038   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:55.600046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:55.600112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:55.632869   80228 cri.go:89] found id: ""
	I0814 17:39:55.632902   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.632914   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:55.632922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:55.632990   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:55.666029   80228 cri.go:89] found id: ""
	I0814 17:39:55.666055   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.666066   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:55.666079   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:55.666136   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:55.697222   80228 cri.go:89] found id: ""
	I0814 17:39:55.697247   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.697254   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:55.697260   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:55.697308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:55.729517   80228 cri.go:89] found id: ""
	I0814 17:39:55.729549   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.729561   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:55.729576   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:55.729640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:55.763890   80228 cri.go:89] found id: ""
	I0814 17:39:55.763922   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.763934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:55.763944   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:55.763960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:55.819588   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:55.819624   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:55.833281   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:55.833314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:55.904610   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:55.904632   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:55.904644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:55.981035   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:55.981069   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:53.513407   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:55.513734   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:56.945649   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.444937   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:57.259832   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.760669   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:58.522870   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:58.536151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:58.536224   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:58.568827   80228 cri.go:89] found id: ""
	I0814 17:39:58.568857   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.568869   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:58.568877   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:58.568946   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:58.600523   80228 cri.go:89] found id: ""
	I0814 17:39:58.600554   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.600564   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:58.600571   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:58.600640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:58.634201   80228 cri.go:89] found id: ""
	I0814 17:39:58.634232   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.634240   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:58.634245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:58.634308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:58.668746   80228 cri.go:89] found id: ""
	I0814 17:39:58.668772   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.668781   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:58.668787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:58.668847   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:58.699695   80228 cri.go:89] found id: ""
	I0814 17:39:58.699727   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.699739   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:58.699752   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:58.699836   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:58.731047   80228 cri.go:89] found id: ""
	I0814 17:39:58.731081   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.731095   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:58.731103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:58.731168   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:58.773454   80228 cri.go:89] found id: ""
	I0814 17:39:58.773486   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.773495   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:58.773501   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:58.773561   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:58.810135   80228 cri.go:89] found id: ""
	I0814 17:39:58.810159   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.810166   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:58.810175   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:58.810191   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:58.844897   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:58.844925   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:58.901700   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:58.901745   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:58.914272   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:58.914296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:58.984593   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:58.984610   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:58.984622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:57.513854   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:00.013241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.945861   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.444575   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:02.262241   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.760164   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.563227   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:01.576764   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:01.576840   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:01.610842   80228 cri.go:89] found id: ""
	I0814 17:40:01.610871   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.610878   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:01.610884   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:01.610935   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:01.643774   80228 cri.go:89] found id: ""
	I0814 17:40:01.643806   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.643816   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:01.643824   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:01.643888   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:01.677867   80228 cri.go:89] found id: ""
	I0814 17:40:01.677892   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.677899   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:01.677906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:01.677967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:01.712394   80228 cri.go:89] found id: ""
	I0814 17:40:01.712420   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.712427   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:01.712433   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:01.712492   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:01.745637   80228 cri.go:89] found id: ""
	I0814 17:40:01.745666   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.745676   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:01.745683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:01.745745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:01.782364   80228 cri.go:89] found id: ""
	I0814 17:40:01.782394   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.782404   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:01.782411   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:01.782484   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:01.814569   80228 cri.go:89] found id: ""
	I0814 17:40:01.814596   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.814605   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:01.814614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:01.814674   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:01.850421   80228 cri.go:89] found id: ""
	I0814 17:40:01.850450   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.850459   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:01.850468   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:01.850482   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:01.862965   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:01.863001   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:01.931312   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:01.931357   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:01.931375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:02.008236   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:02.008278   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:02.043238   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:02.043267   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:04.596909   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:04.610091   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:04.610158   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:04.645169   80228 cri.go:89] found id: ""
	I0814 17:40:04.645195   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.645205   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:04.645213   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:04.645279   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:04.677708   80228 cri.go:89] found id: ""
	I0814 17:40:04.677740   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.677750   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:04.677761   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:04.677823   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:04.710319   80228 cri.go:89] found id: ""
	I0814 17:40:04.710351   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.710362   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:04.710374   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:04.710443   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:04.745166   80228 cri.go:89] found id: ""
	I0814 17:40:04.745202   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.745219   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:04.745226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:04.745287   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:04.777307   80228 cri.go:89] found id: ""
	I0814 17:40:04.777354   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.777376   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:04.777383   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:04.777447   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:04.813854   80228 cri.go:89] found id: ""
	I0814 17:40:04.813886   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.813901   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:04.813908   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:04.813972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:04.848014   80228 cri.go:89] found id: ""
	I0814 17:40:04.848041   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.848049   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:04.848055   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:04.848113   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:04.882689   80228 cri.go:89] found id: ""
	I0814 17:40:04.882719   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.882731   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:04.882742   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:04.882760   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:04.952074   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:04.952096   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:04.952112   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:05.030258   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:05.030300   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:05.066509   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:05.066542   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:05.120153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:05.120195   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:02.512935   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.513254   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.445637   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.945142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.760223   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.760857   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:07.634404   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:07.646900   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:07.646966   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:07.678654   80228 cri.go:89] found id: ""
	I0814 17:40:07.678680   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.678689   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:07.678696   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:07.678753   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:07.711355   80228 cri.go:89] found id: ""
	I0814 17:40:07.711381   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.711389   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:07.711395   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:07.711446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:07.744134   80228 cri.go:89] found id: ""
	I0814 17:40:07.744161   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.744169   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:07.744179   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:07.744242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:07.776981   80228 cri.go:89] found id: ""
	I0814 17:40:07.777008   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.777015   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:07.777022   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:07.777086   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:07.811626   80228 cri.go:89] found id: ""
	I0814 17:40:07.811651   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.811661   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:07.811667   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:07.811720   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:07.843218   80228 cri.go:89] found id: ""
	I0814 17:40:07.843251   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.843262   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:07.843270   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:07.843355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:07.875208   80228 cri.go:89] found id: ""
	I0814 17:40:07.875232   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.875239   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:07.875245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:07.875295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:07.907896   80228 cri.go:89] found id: ""
	I0814 17:40:07.907923   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.907934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:07.907945   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:07.907960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:07.959717   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:07.959753   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:07.973050   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:07.973081   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:08.035085   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:08.035107   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:08.035120   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:08.109722   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:08.109770   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:10.648203   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:10.661194   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:10.661280   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:10.698401   80228 cri.go:89] found id: ""
	I0814 17:40:10.698431   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.698442   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:10.698450   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:10.698515   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:10.730057   80228 cri.go:89] found id: ""
	I0814 17:40:10.730083   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.730094   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:10.730101   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:10.730163   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:10.768780   80228 cri.go:89] found id: ""
	I0814 17:40:10.768807   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.768817   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:10.768824   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:10.768885   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:10.800866   80228 cri.go:89] found id: ""
	I0814 17:40:10.800898   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.800907   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:10.800917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:10.800984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:10.833741   80228 cri.go:89] found id: ""
	I0814 17:40:10.833771   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.833782   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:10.833789   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:10.833850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:10.865670   80228 cri.go:89] found id: ""
	I0814 17:40:10.865699   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.865706   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:10.865717   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:10.865770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:10.904726   80228 cri.go:89] found id: ""
	I0814 17:40:10.904757   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.904765   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:10.904771   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:10.904821   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:10.940549   80228 cri.go:89] found id: ""
	I0814 17:40:10.940578   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.940588   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:10.940598   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:10.940620   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:10.992592   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:10.992622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:11.006388   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:11.006412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:11.075455   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:11.075473   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:11.075486   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:11.156622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:11.156658   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:07.012878   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:09.013908   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.512592   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.444764   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.944931   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.259959   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.760823   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.695055   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:13.709460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:13.709531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:13.741941   80228 cri.go:89] found id: ""
	I0814 17:40:13.741967   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.741975   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:13.741981   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:13.742042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:13.773916   80228 cri.go:89] found id: ""
	I0814 17:40:13.773940   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.773947   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:13.773952   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:13.773999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:13.807871   80228 cri.go:89] found id: ""
	I0814 17:40:13.807902   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.807912   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:13.807918   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:13.807981   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:13.840902   80228 cri.go:89] found id: ""
	I0814 17:40:13.840931   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.840943   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:13.840952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:13.841018   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:13.871969   80228 cri.go:89] found id: ""
	I0814 17:40:13.871998   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.872010   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:13.872019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:13.872090   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:13.905502   80228 cri.go:89] found id: ""
	I0814 17:40:13.905524   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.905531   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:13.905537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:13.905599   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:13.937356   80228 cri.go:89] found id: ""
	I0814 17:40:13.937386   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.937396   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:13.937404   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:13.937466   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:13.972383   80228 cri.go:89] found id: ""
	I0814 17:40:13.972410   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.972418   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:13.972427   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:13.972448   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:14.022691   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:14.022717   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:14.035543   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:14.035567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:14.104869   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:14.104889   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:14.104905   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:14.182185   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:14.182221   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:13.513519   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.012958   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:15.945499   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.445122   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.259488   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.259706   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.259972   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.720519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:16.734323   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:16.734406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:16.769454   80228 cri.go:89] found id: ""
	I0814 17:40:16.769483   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.769493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:16.769501   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:16.769565   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:16.801513   80228 cri.go:89] found id: ""
	I0814 17:40:16.801541   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.801548   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:16.801554   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:16.801610   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:16.835184   80228 cri.go:89] found id: ""
	I0814 17:40:16.835212   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.835220   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:16.835226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:16.835275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:16.867162   80228 cri.go:89] found id: ""
	I0814 17:40:16.867192   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.867201   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:16.867207   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:16.867257   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:16.902912   80228 cri.go:89] found id: ""
	I0814 17:40:16.902942   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.902953   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:16.902961   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:16.903026   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:16.935004   80228 cri.go:89] found id: ""
	I0814 17:40:16.935033   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.935044   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:16.935052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:16.935115   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:16.969082   80228 cri.go:89] found id: ""
	I0814 17:40:16.969110   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.969120   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:16.969127   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:16.969194   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:17.002594   80228 cri.go:89] found id: ""
	I0814 17:40:17.002622   80228 logs.go:276] 0 containers: []
	W0814 17:40:17.002633   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:17.002644   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:17.002659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:17.054319   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:17.054359   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:17.068024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:17.068048   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:17.139480   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:17.139499   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:17.139514   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:17.222086   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:17.222140   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:19.758630   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:19.772186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:19.772254   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:19.807719   80228 cri.go:89] found id: ""
	I0814 17:40:19.807751   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.807760   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:19.807766   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:19.807830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:19.851023   80228 cri.go:89] found id: ""
	I0814 17:40:19.851054   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.851067   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:19.851083   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:19.851154   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:19.882961   80228 cri.go:89] found id: ""
	I0814 17:40:19.882987   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.882997   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:19.883005   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:19.883063   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:19.920312   80228 cri.go:89] found id: ""
	I0814 17:40:19.920345   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.920356   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:19.920365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:19.920430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:19.953628   80228 cri.go:89] found id: ""
	I0814 17:40:19.953658   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.953671   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:19.953683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:19.953741   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:19.984998   80228 cri.go:89] found id: ""
	I0814 17:40:19.985028   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.985036   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:19.985043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:19.985092   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:20.018728   80228 cri.go:89] found id: ""
	I0814 17:40:20.018753   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.018761   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:20.018766   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:20.018814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:20.050718   80228 cri.go:89] found id: ""
	I0814 17:40:20.050743   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.050757   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:20.050765   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:20.050777   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:20.101567   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:20.101602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:20.114890   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:20.114920   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:20.183926   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:20.183948   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:20.183960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:20.270195   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:20.270223   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:18.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.513633   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.445352   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.945704   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.260365   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:24.760475   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.807078   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:22.820187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:22.820260   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:22.852474   80228 cri.go:89] found id: ""
	I0814 17:40:22.852504   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.852514   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:22.852522   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:22.852596   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:22.887141   80228 cri.go:89] found id: ""
	I0814 17:40:22.887167   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.887177   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:22.887184   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:22.887248   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:22.919384   80228 cri.go:89] found id: ""
	I0814 17:40:22.919417   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.919428   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:22.919436   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:22.919502   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:22.951877   80228 cri.go:89] found id: ""
	I0814 17:40:22.951897   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.951905   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:22.951910   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:22.951965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:22.987712   80228 cri.go:89] found id: ""
	I0814 17:40:22.987742   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.987752   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:22.987760   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:22.987832   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:23.025562   80228 cri.go:89] found id: ""
	I0814 17:40:23.025597   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.025608   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:23.025616   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:23.025680   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:23.058928   80228 cri.go:89] found id: ""
	I0814 17:40:23.058955   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.058962   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:23.058969   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:23.059025   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:23.096807   80228 cri.go:89] found id: ""
	I0814 17:40:23.096836   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.096847   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:23.096858   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:23.096874   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:23.148943   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:23.148977   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:23.161905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:23.161927   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:23.232119   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:23.232147   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:23.232160   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:23.320693   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:23.320731   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:25.858506   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:25.871891   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:25.871964   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:25.904732   80228 cri.go:89] found id: ""
	I0814 17:40:25.904760   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.904769   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:25.904775   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:25.904830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:25.936317   80228 cri.go:89] found id: ""
	I0814 17:40:25.936347   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.936358   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:25.936365   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:25.936427   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:25.969921   80228 cri.go:89] found id: ""
	I0814 17:40:25.969946   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.969954   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:25.969960   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:25.970009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:26.022832   80228 cri.go:89] found id: ""
	I0814 17:40:26.022862   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.022872   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:26.022880   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:26.022941   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:26.056178   80228 cri.go:89] found id: ""
	I0814 17:40:26.056206   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.056214   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:26.056224   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:26.056275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:26.086921   80228 cri.go:89] found id: ""
	I0814 17:40:26.086955   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.086966   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:26.086974   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:26.087031   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:26.120631   80228 cri.go:89] found id: ""
	I0814 17:40:26.120665   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.120677   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:26.120686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:26.120745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:26.154258   80228 cri.go:89] found id: ""
	I0814 17:40:26.154289   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.154300   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:26.154310   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:26.154324   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:26.208366   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:26.208405   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:26.222160   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:26.222192   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:26.294737   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:26.294756   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:26.294768   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:22.513813   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.013707   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.444691   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.944277   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.945043   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.260184   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.262080   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:26.372870   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:26.372906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:28.908165   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:28.920754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:28.920816   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:28.953950   80228 cri.go:89] found id: ""
	I0814 17:40:28.953971   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.953978   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:28.953987   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:28.954035   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:28.985228   80228 cri.go:89] found id: ""
	I0814 17:40:28.985266   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.985278   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:28.985286   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:28.985347   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:29.016295   80228 cri.go:89] found id: ""
	I0814 17:40:29.016328   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.016336   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:29.016341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:29.016392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:29.048664   80228 cri.go:89] found id: ""
	I0814 17:40:29.048696   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.048707   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:29.048715   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:29.048778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:29.080441   80228 cri.go:89] found id: ""
	I0814 17:40:29.080466   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.080474   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:29.080520   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:29.080584   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:29.112377   80228 cri.go:89] found id: ""
	I0814 17:40:29.112407   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.112418   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:29.112426   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:29.112493   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:29.145368   80228 cri.go:89] found id: ""
	I0814 17:40:29.145395   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.145403   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:29.145409   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:29.145471   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:29.177305   80228 cri.go:89] found id: ""
	I0814 17:40:29.177333   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.177341   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:29.177350   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:29.177366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:29.232156   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:29.232197   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:29.245286   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:29.245317   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:29.322257   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:29.322286   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:29.322302   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:29.397679   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:29.397714   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:27.512862   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.514756   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.945087   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.444743   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.760242   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.259825   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.935264   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:31.948380   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:31.948446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:31.978898   80228 cri.go:89] found id: ""
	I0814 17:40:31.978925   80228 logs.go:276] 0 containers: []
	W0814 17:40:31.978932   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:31.978939   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:31.978989   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:32.010652   80228 cri.go:89] found id: ""
	I0814 17:40:32.010681   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.010692   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:32.010699   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:32.010767   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:32.044821   80228 cri.go:89] found id: ""
	I0814 17:40:32.044852   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.044860   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:32.044866   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:32.044915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:32.076359   80228 cri.go:89] found id: ""
	I0814 17:40:32.076388   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.076398   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:32.076406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:32.076469   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:32.107652   80228 cri.go:89] found id: ""
	I0814 17:40:32.107680   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.107692   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:32.107709   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:32.107770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:32.138445   80228 cri.go:89] found id: ""
	I0814 17:40:32.138473   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.138484   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:32.138492   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:32.138558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:32.173771   80228 cri.go:89] found id: ""
	I0814 17:40:32.173794   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.173802   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:32.173807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:32.173857   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:32.206387   80228 cri.go:89] found id: ""
	I0814 17:40:32.206418   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.206429   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:32.206441   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:32.206454   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.258114   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:32.258148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:32.271984   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:32.272009   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:32.335423   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:32.335447   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:32.335464   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:32.411155   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:32.411206   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:34.975280   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:34.988098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:34.988176   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:35.022020   80228 cri.go:89] found id: ""
	I0814 17:40:35.022047   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.022062   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:35.022071   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:35.022124   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:35.055528   80228 cri.go:89] found id: ""
	I0814 17:40:35.055568   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.055578   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:35.055586   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:35.055647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:35.088373   80228 cri.go:89] found id: ""
	I0814 17:40:35.088404   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.088415   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:35.088422   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:35.088489   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:35.123162   80228 cri.go:89] found id: ""
	I0814 17:40:35.123188   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.123198   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:35.123206   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:35.123268   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:35.160240   80228 cri.go:89] found id: ""
	I0814 17:40:35.160267   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.160277   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:35.160286   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:35.160348   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:35.196249   80228 cri.go:89] found id: ""
	I0814 17:40:35.196276   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.196285   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:35.196293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:35.196359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:35.232564   80228 cri.go:89] found id: ""
	I0814 17:40:35.232588   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.232598   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:35.232606   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:35.232671   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:35.267357   80228 cri.go:89] found id: ""
	I0814 17:40:35.267383   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.267392   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:35.267399   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:35.267412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:35.279779   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:35.279806   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:35.347748   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:35.347769   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:35.347782   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:35.427900   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:35.427932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:35.468925   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:35.468953   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.013942   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.513138   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.944749   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.444665   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.760292   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:38.020581   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:38.034985   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:38.035066   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:38.070206   80228 cri.go:89] found id: ""
	I0814 17:40:38.070231   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.070240   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:38.070246   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:38.070294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:38.103859   80228 cri.go:89] found id: ""
	I0814 17:40:38.103885   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.103893   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:38.103898   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:38.103947   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:38.138247   80228 cri.go:89] found id: ""
	I0814 17:40:38.138271   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.138278   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:38.138285   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:38.138345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:38.179475   80228 cri.go:89] found id: ""
	I0814 17:40:38.179511   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.179520   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:38.179526   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:38.179578   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:38.224892   80228 cri.go:89] found id: ""
	I0814 17:40:38.224922   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.224932   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:38.224940   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:38.224996   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:38.270456   80228 cri.go:89] found id: ""
	I0814 17:40:38.270485   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.270497   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:38.270504   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:38.270569   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:38.305267   80228 cri.go:89] found id: ""
	I0814 17:40:38.305300   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.305308   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:38.305315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:38.305387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:38.336942   80228 cri.go:89] found id: ""
	I0814 17:40:38.336978   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.336989   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:38.337000   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:38.337016   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:38.388618   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:38.388651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:38.403442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:38.403472   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:38.478225   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:38.478256   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:38.478273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:38.553400   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:38.553440   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:41.089947   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:41.101989   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:41.102070   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:41.133743   80228 cri.go:89] found id: ""
	I0814 17:40:41.133767   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.133774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:41.133780   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:41.133828   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:41.169671   80228 cri.go:89] found id: ""
	I0814 17:40:41.169706   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.169714   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:41.169721   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:41.169773   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:41.203425   80228 cri.go:89] found id: ""
	I0814 17:40:41.203451   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.203459   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:41.203475   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:41.203534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:41.237031   80228 cri.go:89] found id: ""
	I0814 17:40:41.237064   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.237075   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:41.237084   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:41.237149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:41.271095   80228 cri.go:89] found id: ""
	I0814 17:40:41.271120   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.271128   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:41.271134   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:41.271190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:41.303640   80228 cri.go:89] found id: ""
	I0814 17:40:41.303672   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.303684   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:41.303692   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:41.303755   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:37.013555   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.013733   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.013910   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.943472   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.944582   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.261795   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.759672   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.336010   80228 cri.go:89] found id: ""
	I0814 17:40:41.336047   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.336062   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:41.336071   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:41.336140   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:41.370098   80228 cri.go:89] found id: ""
	I0814 17:40:41.370133   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.370143   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:41.370154   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:41.370168   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:41.420760   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:41.420794   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:41.433651   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:41.433678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:41.506623   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:41.506644   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:41.506657   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:41.591390   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:41.591426   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:44.130649   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:44.144362   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:44.144428   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:44.178485   80228 cri.go:89] found id: ""
	I0814 17:40:44.178516   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.178527   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:44.178535   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:44.178600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:44.214231   80228 cri.go:89] found id: ""
	I0814 17:40:44.214260   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.214268   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:44.214274   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:44.214336   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:44.248483   80228 cri.go:89] found id: ""
	I0814 17:40:44.248513   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.248524   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:44.248531   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:44.248600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:44.282445   80228 cri.go:89] found id: ""
	I0814 17:40:44.282472   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.282481   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:44.282493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:44.282560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:44.315141   80228 cri.go:89] found id: ""
	I0814 17:40:44.315169   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.315190   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:44.315198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:44.315259   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:44.346756   80228 cri.go:89] found id: ""
	I0814 17:40:44.346781   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.346789   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:44.346795   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:44.346853   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:44.378143   80228 cri.go:89] found id: ""
	I0814 17:40:44.378172   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.378183   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:44.378191   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:44.378255   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:44.411526   80228 cri.go:89] found id: ""
	I0814 17:40:44.411557   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.411567   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:44.411578   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:44.411592   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:44.459873   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:44.459913   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:44.473112   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:44.473148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:44.547514   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:44.547546   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:44.547579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:44.630377   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:44.630415   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:43.512113   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.512590   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.945080   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.946506   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.760626   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:48.260015   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.260186   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.173094   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:47.185854   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:47.185927   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:47.228755   80228 cri.go:89] found id: ""
	I0814 17:40:47.228781   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.228788   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:47.228795   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:47.228851   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:47.264986   80228 cri.go:89] found id: ""
	I0814 17:40:47.265020   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.265031   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:47.265037   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:47.265100   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:47.296900   80228 cri.go:89] found id: ""
	I0814 17:40:47.296929   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.296940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:47.296947   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:47.297009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:47.328120   80228 cri.go:89] found id: ""
	I0814 17:40:47.328147   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.328155   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:47.328161   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:47.328210   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:47.364147   80228 cri.go:89] found id: ""
	I0814 17:40:47.364171   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.364178   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:47.364184   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:47.364238   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:47.400466   80228 cri.go:89] found id: ""
	I0814 17:40:47.400493   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.400501   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:47.400507   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:47.400562   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:47.432681   80228 cri.go:89] found id: ""
	I0814 17:40:47.432713   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.432724   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:47.432732   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:47.432801   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:47.465466   80228 cri.go:89] found id: ""
	I0814 17:40:47.465498   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.465510   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:47.465522   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:47.465536   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:47.502076   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:47.502114   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:47.554451   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:47.554488   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:47.567658   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:47.567690   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:47.635805   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:47.635829   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:47.635844   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:50.215353   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:50.227723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:50.227795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:50.258250   80228 cri.go:89] found id: ""
	I0814 17:40:50.258276   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.258287   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:50.258296   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:50.258363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:50.291371   80228 cri.go:89] found id: ""
	I0814 17:40:50.291406   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.291416   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:50.291423   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:50.291479   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:50.321449   80228 cri.go:89] found id: ""
	I0814 17:40:50.321473   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.321481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:50.321486   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:50.321545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:50.351752   80228 cri.go:89] found id: ""
	I0814 17:40:50.351780   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.351791   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:50.351799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:50.351856   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:50.382022   80228 cri.go:89] found id: ""
	I0814 17:40:50.382050   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.382057   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:50.382063   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:50.382118   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:50.414057   80228 cri.go:89] found id: ""
	I0814 17:40:50.414083   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.414091   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:50.414098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:50.414156   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:50.447508   80228 cri.go:89] found id: ""
	I0814 17:40:50.447530   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.447537   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:50.447543   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:50.447606   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:50.487401   80228 cri.go:89] found id: ""
	I0814 17:40:50.487425   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.487434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:50.487442   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:50.487455   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:50.524404   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:50.524439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:50.578220   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:50.578256   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:50.591405   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:50.591431   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:50.657727   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:50.657750   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:50.657762   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:47.514490   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.012588   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.445363   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.944903   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.760728   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.760918   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:53.237985   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:53.250502   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:53.250572   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:53.285728   80228 cri.go:89] found id: ""
	I0814 17:40:53.285763   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.285774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:53.285784   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:53.285848   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:53.318195   80228 cri.go:89] found id: ""
	I0814 17:40:53.318231   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.318243   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:53.318252   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:53.318317   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:53.350259   80228 cri.go:89] found id: ""
	I0814 17:40:53.350291   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.350302   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:53.350310   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:53.350385   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:53.385894   80228 cri.go:89] found id: ""
	I0814 17:40:53.385920   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.385928   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:53.385934   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:53.385983   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:53.420851   80228 cri.go:89] found id: ""
	I0814 17:40:53.420878   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.420890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:53.420897   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:53.420963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:53.458332   80228 cri.go:89] found id: ""
	I0814 17:40:53.458370   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.458381   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:53.458392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:53.458465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:53.489719   80228 cri.go:89] found id: ""
	I0814 17:40:53.489750   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.489759   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:53.489765   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:53.489820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:53.522942   80228 cri.go:89] found id: ""
	I0814 17:40:53.522977   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.522988   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:53.522998   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:53.523013   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:53.599450   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:53.599492   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:53.637225   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:53.637254   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:53.688605   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:53.688647   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:53.704601   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:53.704633   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:53.775046   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.275201   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:56.288406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:56.288463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:52.013747   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.513735   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.514335   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:55.445462   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.447142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.946025   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.261047   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.760136   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.322862   80228 cri.go:89] found id: ""
	I0814 17:40:56.322891   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.322899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:56.322905   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:56.322954   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:56.356214   80228 cri.go:89] found id: ""
	I0814 17:40:56.356243   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.356262   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:56.356268   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:56.356338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:56.388877   80228 cri.go:89] found id: ""
	I0814 17:40:56.388900   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.388909   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:56.388915   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:56.388967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:56.422552   80228 cri.go:89] found id: ""
	I0814 17:40:56.422577   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.422585   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:56.422590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:56.422649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:56.456995   80228 cri.go:89] found id: ""
	I0814 17:40:56.457018   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.457026   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:56.457031   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:56.457079   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:56.495745   80228 cri.go:89] found id: ""
	I0814 17:40:56.495772   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.495788   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:56.495797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:56.495868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:56.529139   80228 cri.go:89] found id: ""
	I0814 17:40:56.529171   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.529179   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:56.529185   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:56.529237   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:56.561377   80228 cri.go:89] found id: ""
	I0814 17:40:56.561406   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.561414   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:56.561424   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:56.561439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:56.601504   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:56.601537   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:56.653369   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:56.653403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:56.666117   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:56.666144   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:56.731921   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.731949   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:56.731963   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.315712   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:59.328425   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:59.328486   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:59.364056   80228 cri.go:89] found id: ""
	I0814 17:40:59.364080   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.364088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:59.364094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:59.364151   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:59.398948   80228 cri.go:89] found id: ""
	I0814 17:40:59.398971   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.398978   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:59.398984   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:59.399029   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:59.430301   80228 cri.go:89] found id: ""
	I0814 17:40:59.430327   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.430335   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:59.430341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:59.430406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:59.465278   80228 cri.go:89] found id: ""
	I0814 17:40:59.465301   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.465309   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:59.465315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:59.465372   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:59.497544   80228 cri.go:89] found id: ""
	I0814 17:40:59.497575   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.497586   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:59.497595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:59.497659   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:59.529463   80228 cri.go:89] found id: ""
	I0814 17:40:59.529494   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.529506   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:59.529513   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:59.529587   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:59.562448   80228 cri.go:89] found id: ""
	I0814 17:40:59.562477   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.562487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:59.562495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:59.562609   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:59.594059   80228 cri.go:89] found id: ""
	I0814 17:40:59.594089   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.594103   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:59.594112   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:59.594123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.672139   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:59.672172   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:59.710714   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:59.710743   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:59.762645   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:59.762676   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:59.776006   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:59.776033   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:59.838187   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:59.013030   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:01.013280   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.445595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.944484   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.260244   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.760862   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.338964   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:02.351381   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:02.351460   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:02.383206   80228 cri.go:89] found id: ""
	I0814 17:41:02.383235   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.383244   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:02.383250   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:02.383310   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:02.417016   80228 cri.go:89] found id: ""
	I0814 17:41:02.417042   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.417049   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:02.417055   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:02.417111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:02.451936   80228 cri.go:89] found id: ""
	I0814 17:41:02.451964   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.451974   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:02.451982   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:02.452042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:02.489896   80228 cri.go:89] found id: ""
	I0814 17:41:02.489927   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.489937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:02.489945   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:02.490011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:02.524273   80228 cri.go:89] found id: ""
	I0814 17:41:02.524308   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.524339   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:02.524346   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:02.524409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:02.558813   80228 cri.go:89] found id: ""
	I0814 17:41:02.558842   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.558850   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:02.558861   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:02.558917   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:02.592704   80228 cri.go:89] found id: ""
	I0814 17:41:02.592733   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.592747   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:02.592753   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:02.592818   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:02.625250   80228 cri.go:89] found id: ""
	I0814 17:41:02.625277   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.625288   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:02.625299   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:02.625312   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:02.677577   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:02.677613   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:02.691407   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:02.691439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:02.756797   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:02.756869   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:02.756888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:02.830803   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:02.830842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.370085   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:05.385272   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:05.385342   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:05.421775   80228 cri.go:89] found id: ""
	I0814 17:41:05.421799   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.421806   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:05.421812   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:05.421860   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:05.457054   80228 cri.go:89] found id: ""
	I0814 17:41:05.457083   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.457093   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:05.457100   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:05.457153   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:05.489290   80228 cri.go:89] found id: ""
	I0814 17:41:05.489330   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.489338   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:05.489345   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:05.489392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:05.527066   80228 cri.go:89] found id: ""
	I0814 17:41:05.527091   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.527098   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:05.527105   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:05.527155   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:05.563882   80228 cri.go:89] found id: ""
	I0814 17:41:05.563915   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.563925   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:05.563931   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:05.563982   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:05.601837   80228 cri.go:89] found id: ""
	I0814 17:41:05.601863   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.601871   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:05.601879   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:05.601940   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:05.633503   80228 cri.go:89] found id: ""
	I0814 17:41:05.633531   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.633539   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:05.633545   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:05.633615   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:05.668281   80228 cri.go:89] found id: ""
	I0814 17:41:05.668312   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.668324   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:05.668335   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:05.668349   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:05.747214   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:05.747249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.784408   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:05.784441   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:05.835067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:05.835103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:05.847938   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:05.847966   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:05.917404   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:03.513033   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:05.514476   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:06.944595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.944850   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:07.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:09.762513   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.417559   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:08.431092   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:08.431165   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:08.465357   80228 cri.go:89] found id: ""
	I0814 17:41:08.465515   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.465543   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:08.465560   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:08.465675   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:08.499085   80228 cri.go:89] found id: ""
	I0814 17:41:08.499114   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.499123   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:08.499129   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:08.499180   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:08.533881   80228 cri.go:89] found id: ""
	I0814 17:41:08.533909   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.533917   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:08.533922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:08.533972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:08.570503   80228 cri.go:89] found id: ""
	I0814 17:41:08.570549   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.570560   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:08.570572   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:08.570649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:08.602557   80228 cri.go:89] found id: ""
	I0814 17:41:08.602599   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.602610   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:08.602691   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:08.602785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:08.636174   80228 cri.go:89] found id: ""
	I0814 17:41:08.636199   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.636206   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:08.636213   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:08.636261   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:08.672774   80228 cri.go:89] found id: ""
	I0814 17:41:08.672804   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.672815   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:08.672823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:08.672890   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:08.705535   80228 cri.go:89] found id: ""
	I0814 17:41:08.705590   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.705605   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:08.705622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:08.705641   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:08.744315   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:08.744341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:08.794632   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:08.794666   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:08.808089   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:08.808117   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:08.876417   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:08.876436   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:08.876452   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
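The cycle above (repeated throughout this log by process 80228) is minikube probing for each expected control-plane container by name with `sudo crictl ps -a --quiet --name=<component>` and, finding none, falling back to gathering kubelet, dmesg and CRI-O journal output instead. A minimal Go sketch of that probe loop follows; it is an illustration only, not minikube's actual cri.go/logs.go code, and it assumes crictl is on PATH and is run locally rather than through minikube's SSH runner.

// Hypothetical sketch: ask crictl for containers matching each control-plane
// component name, the way the log lines above do, and report missing ones.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the logged command:
//   sudo crictl ps -a --quiet --name=<name>
// which prints one container ID per line, or nothing if no match exists.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		switch {
		case err != nil:
			fmt.Printf("probe %q failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("no container found matching %q\n", c)
		default:
			fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
		}
	}
}

Every probe in this cycle returning an empty ID list is consistent with the repeated "connection to the server localhost:8443 was refused" errors from the describe-nodes step: no kube-apiserver container is running on that node at all.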
	I0814 17:41:08.013688   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:10.512639   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.444206   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:13.944056   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:12.260065   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:14.759640   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.458562   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:11.470905   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:11.470965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:11.505992   80228 cri.go:89] found id: ""
	I0814 17:41:11.506023   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.506036   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:11.506044   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:11.506112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:11.540893   80228 cri.go:89] found id: ""
	I0814 17:41:11.540922   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.540932   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:11.540945   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:11.541001   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:11.575423   80228 cri.go:89] found id: ""
	I0814 17:41:11.575448   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.575455   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:11.575462   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:11.575520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:11.608126   80228 cri.go:89] found id: ""
	I0814 17:41:11.608155   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.608164   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:11.608171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:11.608222   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:11.640165   80228 cri.go:89] found id: ""
	I0814 17:41:11.640190   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.640198   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:11.640204   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:11.640263   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:11.674425   80228 cri.go:89] found id: ""
	I0814 17:41:11.674446   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.674455   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:11.674460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:11.674513   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:11.707448   80228 cri.go:89] found id: ""
	I0814 17:41:11.707477   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.707487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:11.707493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:11.707555   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:11.744309   80228 cri.go:89] found id: ""
	I0814 17:41:11.744338   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.744346   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:11.744363   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:11.744375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:11.824165   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:11.824196   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:11.862013   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:11.862039   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:11.913862   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:11.913902   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:11.927147   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:11.927178   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:11.998403   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.498590   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:14.512847   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:14.512938   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:14.549255   80228 cri.go:89] found id: ""
	I0814 17:41:14.549288   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.549306   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:14.549316   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:14.549382   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:14.588917   80228 cri.go:89] found id: ""
	I0814 17:41:14.588948   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.588956   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:14.588963   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:14.589012   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:14.622581   80228 cri.go:89] found id: ""
	I0814 17:41:14.622611   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.622621   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:14.622628   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:14.622693   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:14.656029   80228 cri.go:89] found id: ""
	I0814 17:41:14.656056   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.656064   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:14.656070   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:14.656117   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:14.687502   80228 cri.go:89] found id: ""
	I0814 17:41:14.687527   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.687536   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:14.687541   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:14.687614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:14.720682   80228 cri.go:89] found id: ""
	I0814 17:41:14.720713   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.720721   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:14.720728   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:14.720778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:14.752482   80228 cri.go:89] found id: ""
	I0814 17:41:14.752511   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.752520   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:14.752525   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:14.752577   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:14.792980   80228 cri.go:89] found id: ""
	I0814 17:41:14.793004   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.793014   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:14.793026   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:14.793042   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:14.845259   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:14.845297   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:14.858530   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:14.858556   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:14.931025   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.931054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:14.931067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:15.008081   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:15.008115   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:13.014174   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:15.512768   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444802   79521 pod_ready.go:81] duration metric: took 4m0.006448573s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:16.444810   79521 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 17:41:16.444817   79521 pod_ready.go:38] duration metric: took 4m5.044051569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
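The extra wait above gave up after roughly four minutes because the metrics-server pod never reported a Ready condition of "True". A hedged way to reproduce that single check by hand is sketched below; it is illustrative only, the pod name is simply the one that appears in this log, and kubectl is assumed to already point at the same cluster.

// Illustrative sketch: read a pod's Ready condition via kubectl's jsonpath
// output, matching the condition the pod_ready loop above keeps polling.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", name, "-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Pod name taken from this log; hypothetical usage.
	ready, err := podReady("kube-system", "metrics-server-6867b74b74-jflvw")
	fmt.Println(ready, err)
}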
	I0814 17:41:16.444832   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:41:16.444858   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:16.444901   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:16.499710   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.499742   79521 cri.go:89] found id: ""
	I0814 17:41:16.499751   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:16.499815   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.504467   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:16.504544   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:16.546815   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.546842   79521 cri.go:89] found id: ""
	I0814 17:41:16.546851   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:16.546905   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.550917   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:16.550986   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:16.590195   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:16.590216   79521 cri.go:89] found id: ""
	I0814 17:41:16.590224   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:16.590267   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.594123   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:16.594196   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:16.631058   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:16.631091   79521 cri.go:89] found id: ""
	I0814 17:41:16.631101   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:16.631163   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.635151   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:16.635226   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:16.671555   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.671582   79521 cri.go:89] found id: ""
	I0814 17:41:16.671592   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:16.671657   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.675790   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:16.675847   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:16.713131   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:16.713157   79521 cri.go:89] found id: ""
	I0814 17:41:16.713165   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:16.713217   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.717296   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:16.717354   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:16.756212   79521 cri.go:89] found id: ""
	I0814 17:41:16.756245   79521 logs.go:276] 0 containers: []
	W0814 17:41:16.756255   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:16.756261   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:16.756324   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:16.802379   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.802411   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.802417   79521 cri.go:89] found id: ""
	I0814 17:41:16.802431   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:16.802492   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.807105   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.811210   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:16.811241   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.852490   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:16.852526   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.894384   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:16.894425   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.929919   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:16.929949   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.965031   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:16.965061   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:17.468878   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.468945   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.482799   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.482826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:17.610874   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:17.610904   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:17.649292   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:17.649322   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:17.691014   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:17.691045   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:17.749218   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.749254   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.794240   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.794280   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.868805   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:17.868851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:18.760369   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:17.544873   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:17.557699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:17.557791   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:17.600314   80228 cri.go:89] found id: ""
	I0814 17:41:17.600347   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.600360   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:17.600370   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:17.600441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:17.634873   80228 cri.go:89] found id: ""
	I0814 17:41:17.634902   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.634914   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:17.634923   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:17.634986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:17.670521   80228 cri.go:89] found id: ""
	I0814 17:41:17.670552   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.670563   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:17.670571   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:17.670647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:17.705587   80228 cri.go:89] found id: ""
	I0814 17:41:17.705612   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.705626   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:17.705632   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:17.705682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:17.768178   80228 cri.go:89] found id: ""
	I0814 17:41:17.768207   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.768218   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:17.768226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:17.768290   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:17.804692   80228 cri.go:89] found id: ""
	I0814 17:41:17.804721   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.804729   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:17.804735   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:17.804795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:17.847994   80228 cri.go:89] found id: ""
	I0814 17:41:17.848030   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.848041   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:17.848052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:17.848122   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:17.883905   80228 cri.go:89] found id: ""
	I0814 17:41:17.883935   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.883944   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:17.883953   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.883965   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.931481   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.931522   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.983315   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.983363   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.996941   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.996981   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:18.067254   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:18.067279   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:18.067295   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:20.642099   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.655941   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.656014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.692525   80228 cri.go:89] found id: ""
	I0814 17:41:20.692554   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.692565   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:20.692577   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.692634   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.727721   80228 cri.go:89] found id: ""
	I0814 17:41:20.727755   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.727769   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:20.727778   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.727845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.770441   80228 cri.go:89] found id: ""
	I0814 17:41:20.770471   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.770481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:20.770488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.770550   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.807932   80228 cri.go:89] found id: ""
	I0814 17:41:20.807961   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.807968   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:20.807975   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.808030   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.849919   80228 cri.go:89] found id: ""
	I0814 17:41:20.849944   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.849963   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:20.849970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.850045   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.887351   80228 cri.go:89] found id: ""
	I0814 17:41:20.887382   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.887393   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:20.887401   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.887465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.921284   80228 cri.go:89] found id: ""
	I0814 17:41:20.921310   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.921321   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.921328   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:20.921409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:20.955238   80228 cri.go:89] found id: ""
	I0814 17:41:20.955267   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.955278   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:20.955288   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.955314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:21.024544   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:21.024565   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.024579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:21.103987   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.104019   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.145515   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.145550   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.197307   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.197346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.514682   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.015152   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.429364   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.445075   79521 api_server.go:72] duration metric: took 4m16.759338748s to wait for apiserver process to appear ...
	I0814 17:41:20.445102   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:41:20.445133   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.445179   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.477630   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.477655   79521 cri.go:89] found id: ""
	I0814 17:41:20.477663   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:20.477714   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.481667   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.481728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.514443   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.514465   79521 cri.go:89] found id: ""
	I0814 17:41:20.514473   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:20.514516   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.518344   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.518401   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.559625   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:20.559647   79521 cri.go:89] found id: ""
	I0814 17:41:20.559653   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:20.559706   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.564137   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.564203   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.603504   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:20.603531   79521 cri.go:89] found id: ""
	I0814 17:41:20.603540   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:20.603602   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.608260   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.608334   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.641466   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:20.641487   79521 cri.go:89] found id: ""
	I0814 17:41:20.641494   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:20.641538   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.645566   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.645625   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.685003   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:20.685032   79521 cri.go:89] found id: ""
	I0814 17:41:20.685042   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:20.685104   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.690347   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.690429   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.733753   79521 cri.go:89] found id: ""
	I0814 17:41:20.733782   79521 logs.go:276] 0 containers: []
	W0814 17:41:20.733793   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.733800   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:20.733862   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:20.781659   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:20.781683   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:20.781689   79521 cri.go:89] found id: ""
	I0814 17:41:20.781697   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:20.781753   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.786293   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.790358   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.790377   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:20.916473   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:20.916513   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.968706   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:20.968743   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:21.003507   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:21.003546   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:21.049909   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:21.049961   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:21.090052   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:21.090080   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:21.129551   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.129585   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.174792   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.174828   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.247392   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.247440   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:21.261095   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:21.261129   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:21.306583   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:21.306616   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:21.339602   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:21.339642   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:21.397695   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.397732   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.301807   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:41:24.306392   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:41:24.307364   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:41:24.307390   79521 api_server.go:131] duration metric: took 3.862280551s to wait for apiserver health ...
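Here, unlike the localhost:8443 refusals earlier in this log, the healthz probe against https://192.168.61.2:8443/healthz returns 200 "ok", so this cluster's control plane is up even though metrics-server stays unready. A minimal sketch of such a probe follows; it assumes no client certificates are available and therefore skips TLS verification, which minikube's real, certificate-authenticated client does not do.

// Illustrative sketch: poll an apiserver /healthz endpoint and treat an
// HTTP 200 response with body "ok" as healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func healthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: no client certs, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := healthz("https://192.168.61.2:8443/healthz")
	fmt.Println(ok, err)
}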
	I0814 17:41:24.307398   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:41:24.307418   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:24.307463   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:24.342519   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:24.342552   79521 cri.go:89] found id: ""
	I0814 17:41:24.342561   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:24.342627   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.346361   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:24.346422   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:24.386973   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:24.387001   79521 cri.go:89] found id: ""
	I0814 17:41:24.387012   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:24.387066   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.390942   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:24.390999   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:24.426841   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:24.426863   79521 cri.go:89] found id: ""
	I0814 17:41:24.426872   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:24.426927   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.430856   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:24.430917   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:24.467024   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.467050   79521 cri.go:89] found id: ""
	I0814 17:41:24.467059   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:24.467117   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.471659   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:24.471728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:24.506759   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:24.506786   79521 cri.go:89] found id: ""
	I0814 17:41:24.506799   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:24.506857   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.511660   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:24.511728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:24.547768   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:24.547795   79521 cri.go:89] found id: ""
	I0814 17:41:24.547805   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:24.547862   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.552881   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:24.552941   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:24.588519   79521 cri.go:89] found id: ""
	I0814 17:41:24.588544   79521 logs.go:276] 0 containers: []
	W0814 17:41:24.588551   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:24.588557   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:24.588602   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:24.624604   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.624626   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:24.624630   79521 cri.go:89] found id: ""
	I0814 17:41:24.624636   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:24.624691   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.628703   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.632611   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:24.632636   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.671903   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:24.671935   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.709821   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.709851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:25.107477   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:25.107515   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:25.221012   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:25.221041   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.760924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.259780   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.260347   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.712584   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:23.726467   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:23.726545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:23.762871   80228 cri.go:89] found id: ""
	I0814 17:41:23.762906   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.762916   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:23.762922   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:23.762972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:23.800068   80228 cri.go:89] found id: ""
	I0814 17:41:23.800096   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.800105   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:23.800113   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:23.800173   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:23.834913   80228 cri.go:89] found id: ""
	I0814 17:41:23.834945   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.834956   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:23.834963   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:23.835022   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:23.871196   80228 cri.go:89] found id: ""
	I0814 17:41:23.871222   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.871233   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:23.871240   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:23.871294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:23.907830   80228 cri.go:89] found id: ""
	I0814 17:41:23.907854   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.907862   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:23.907868   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:23.907926   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:23.941110   80228 cri.go:89] found id: ""
	I0814 17:41:23.941133   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.941141   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:23.941146   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:23.941197   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:23.973602   80228 cri.go:89] found id: ""
	I0814 17:41:23.973631   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.973649   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:23.973655   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:23.973710   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:24.007398   80228 cri.go:89] found id: ""
	I0814 17:41:24.007436   80228 logs.go:276] 0 containers: []
	W0814 17:41:24.007450   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:24.007462   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:24.007478   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:24.061830   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:24.061867   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:24.075012   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:24.075046   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:24.148666   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:24.148692   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.148703   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.230208   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:24.230248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:22.513616   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.013383   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.272397   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:25.272429   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:25.317574   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:25.317603   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:25.352239   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:25.352271   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:25.409997   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:25.410030   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:25.443875   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:25.443899   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:25.490987   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:25.491023   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:25.563495   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:25.563531   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:25.577305   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:25.577345   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:28.147823   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:41:28.147855   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.147860   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.147864   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.147867   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.147870   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.147874   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.147879   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.147883   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.147890   79521 system_pods.go:74] duration metric: took 3.840486938s to wait for pod list to return data ...
	I0814 17:41:28.147898   79521 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:41:28.150377   79521 default_sa.go:45] found service account: "default"
	I0814 17:41:28.150398   79521 default_sa.go:55] duration metric: took 2.493777ms for default service account to be created ...
	I0814 17:41:28.150406   79521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:41:28.154470   79521 system_pods.go:86] 8 kube-system pods found
	I0814 17:41:28.154494   79521 system_pods.go:89] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.154500   79521 system_pods.go:89] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.154504   79521 system_pods.go:89] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.154510   79521 system_pods.go:89] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.154514   79521 system_pods.go:89] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.154519   79521 system_pods.go:89] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.154525   79521 system_pods.go:89] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.154530   79521 system_pods.go:89] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.154537   79521 system_pods.go:126] duration metric: took 4.125964ms to wait for k8s-apps to be running ...
	I0814 17:41:28.154544   79521 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:41:28.154585   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:28.170494   79521 system_svc.go:56] duration metric: took 15.940728ms WaitForService to wait for kubelet
	I0814 17:41:28.170524   79521 kubeadm.go:582] duration metric: took 4m24.484791018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:41:28.170545   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:41:28.173368   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:41:28.173395   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:41:28.173407   79521 node_conditions.go:105] duration metric: took 2.858344ms to run NodePressure ...
	I0814 17:41:28.173417   79521 start.go:241] waiting for startup goroutines ...
	I0814 17:41:28.173424   79521 start.go:246] waiting for cluster config update ...
	I0814 17:41:28.173435   79521 start.go:255] writing updated cluster config ...
	I0814 17:41:28.173730   79521 ssh_runner.go:195] Run: rm -f paused
	I0814 17:41:28.219460   79521 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:41:28.221461   79521 out.go:177] * Done! kubectl is now configured to use "embed-certs-309673" cluster and "default" namespace by default
	I0814 17:41:27.761580   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:30.260454   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:26.776204   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:26.789057   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:26.789132   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:26.822531   80228 cri.go:89] found id: ""
	I0814 17:41:26.822564   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.822575   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:26.822590   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:26.822651   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:26.855314   80228 cri.go:89] found id: ""
	I0814 17:41:26.855353   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.855365   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:26.855372   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:26.855434   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:26.889389   80228 cri.go:89] found id: ""
	I0814 17:41:26.889413   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.889421   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:26.889427   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:26.889485   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:26.925478   80228 cri.go:89] found id: ""
	I0814 17:41:26.925500   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.925508   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:26.925514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:26.925560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:26.957012   80228 cri.go:89] found id: ""
	I0814 17:41:26.957042   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.957053   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:26.957061   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:26.957114   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:26.989358   80228 cri.go:89] found id: ""
	I0814 17:41:26.989388   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.989399   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:26.989406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:26.989468   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:27.024761   80228 cri.go:89] found id: ""
	I0814 17:41:27.024786   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.024805   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:27.024830   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:27.024895   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:27.059172   80228 cri.go:89] found id: ""
	I0814 17:41:27.059204   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.059215   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:27.059226   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:27.059240   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.096123   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:27.096151   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:27.147689   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:27.147719   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:27.161454   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:27.161483   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:27.234644   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:27.234668   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:27.234680   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:29.817428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:29.831731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:29.831811   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:29.868531   80228 cri.go:89] found id: ""
	I0814 17:41:29.868567   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.868577   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:29.868585   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:29.868657   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:29.913578   80228 cri.go:89] found id: ""
	I0814 17:41:29.913602   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.913611   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:29.913617   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:29.913677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:29.963916   80228 cri.go:89] found id: ""
	I0814 17:41:29.963939   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.963946   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:29.963952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:29.964011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:30.016735   80228 cri.go:89] found id: ""
	I0814 17:41:30.016763   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.016773   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:30.016781   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:30.016841   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:30.048852   80228 cri.go:89] found id: ""
	I0814 17:41:30.048880   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.048890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:30.048898   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:30.048960   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:30.080291   80228 cri.go:89] found id: ""
	I0814 17:41:30.080324   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.080335   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:30.080343   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:30.080506   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:30.113876   80228 cri.go:89] found id: ""
	I0814 17:41:30.113904   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.113914   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:30.113921   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:30.113984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:30.147568   80228 cri.go:89] found id: ""
	I0814 17:41:30.147594   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.147604   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:30.147614   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:30.147627   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:30.197596   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:30.197630   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:30.210576   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:30.210602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:30.277711   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:30.277731   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:30.277746   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:30.356556   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:30.356590   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.013699   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:29.014020   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:31.512974   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:32.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:35.254066   79871 pod_ready.go:81] duration metric: took 4m0.000392709s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:35.254095   79871 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:41:35.254112   79871 pod_ready.go:38] duration metric: took 4m12.044429915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:35.254137   79871 kubeadm.go:597] duration metric: took 4m20.041916203s to restartPrimaryControlPlane
	W0814 17:41:35.254189   79871 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:35.254218   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:32.892697   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:32.909435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:32.909497   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:32.945055   80228 cri.go:89] found id: ""
	I0814 17:41:32.945080   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.945088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:32.945094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:32.945150   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:32.979266   80228 cri.go:89] found id: ""
	I0814 17:41:32.979294   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.979305   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:32.979312   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:32.979398   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:33.014260   80228 cri.go:89] found id: ""
	I0814 17:41:33.014286   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.014294   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:33.014299   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:33.014351   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:33.047590   80228 cri.go:89] found id: ""
	I0814 17:41:33.047622   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.047633   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:33.047646   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:33.047711   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:33.081258   80228 cri.go:89] found id: ""
	I0814 17:41:33.081294   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.081328   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:33.081337   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:33.081403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:33.112209   80228 cri.go:89] found id: ""
	I0814 17:41:33.112237   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.112247   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:33.112254   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:33.112318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:33.143854   80228 cri.go:89] found id: ""
	I0814 17:41:33.143892   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.143904   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:33.143913   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:33.143977   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:33.175147   80228 cri.go:89] found id: ""
	I0814 17:41:33.175190   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.175201   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:33.175212   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:33.175226   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:33.212877   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:33.212908   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:33.268067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:33.268103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:33.281357   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:33.281386   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:33.350233   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:33.350257   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:33.350269   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:35.929498   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:35.942290   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:35.942354   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:35.975782   80228 cri.go:89] found id: ""
	I0814 17:41:35.975809   80228 logs.go:276] 0 containers: []
	W0814 17:41:35.975818   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:35.975826   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:35.975886   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:36.008165   80228 cri.go:89] found id: ""
	I0814 17:41:36.008191   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.008200   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:36.008206   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:36.008262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:36.044912   80228 cri.go:89] found id: ""
	I0814 17:41:36.044937   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.044945   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:36.044954   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:36.045002   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:36.078068   80228 cri.go:89] found id: ""
	I0814 17:41:36.078096   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.078108   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:36.078116   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:36.078179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:36.110429   80228 cri.go:89] found id: ""
	I0814 17:41:36.110456   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.110467   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:36.110480   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:36.110540   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:36.142086   80228 cri.go:89] found id: ""
	I0814 17:41:36.142111   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.142119   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:36.142125   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:36.142186   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:36.172738   80228 cri.go:89] found id: ""
	I0814 17:41:36.172761   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.172769   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:36.172775   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:36.172831   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:36.204345   80228 cri.go:89] found id: ""
	I0814 17:41:36.204368   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.204376   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:36.204388   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:36.204403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:36.216667   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:36.216689   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:36.279509   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:36.279528   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:36.279540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:33.513591   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.013400   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.360411   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:36.360447   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:36.398193   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:36.398230   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:38.952415   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:38.968484   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:38.968554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:39.002450   80228 cri.go:89] found id: ""
	I0814 17:41:39.002479   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.002486   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:39.002493   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:39.002551   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:39.035840   80228 cri.go:89] found id: ""
	I0814 17:41:39.035868   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.035876   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:39.035882   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:39.035934   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:39.069900   80228 cri.go:89] found id: ""
	I0814 17:41:39.069929   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.069940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:39.069946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:39.069999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:39.104657   80228 cri.go:89] found id: ""
	I0814 17:41:39.104681   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.104689   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:39.104695   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:39.104751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:39.137279   80228 cri.go:89] found id: ""
	I0814 17:41:39.137312   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.137322   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:39.137330   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:39.137403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:39.170377   80228 cri.go:89] found id: ""
	I0814 17:41:39.170414   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.170424   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:39.170430   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:39.170491   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:39.205742   80228 cri.go:89] found id: ""
	I0814 17:41:39.205779   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.205790   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:39.205796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:39.205850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:39.239954   80228 cri.go:89] found id: ""
	I0814 17:41:39.239979   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.239987   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:39.239994   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:39.240011   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:39.276587   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:39.276619   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:39.329286   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:39.329322   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:39.342232   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:39.342257   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:39.411043   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:39.411063   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:39.411075   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:38.013562   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:40.013740   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:41.994479   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:42.007736   80228 kubeadm.go:597] duration metric: took 4m4.488869114s to restartPrimaryControlPlane
	W0814 17:41:42.007822   80228 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:42.007871   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:42.513259   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:45.013455   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:46.541593   80228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.533697889s)
	I0814 17:41:46.541676   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:46.556181   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:41:46.565943   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:41:46.575481   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:41:46.575501   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:41:46.575549   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:41:46.585143   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:41:46.585202   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:41:46.595157   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:41:46.604539   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:41:46.604600   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:41:46.613345   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.622186   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:41:46.622242   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.631221   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:41:46.640649   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:41:46.640706   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:41:46.650161   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:41:46.724104   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:41:46.724182   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:41:46.860463   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:41:46.860606   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:41:46.860725   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:41:47.036697   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:41:47.038444   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:41:47.038561   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:41:47.038670   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:41:47.038775   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:41:47.038860   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:41:47.038973   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:41:47.039067   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:41:47.039172   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:41:47.039256   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:41:47.039359   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:41:47.039456   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:41:47.039516   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:41:47.039587   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:41:47.278696   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:41:47.664300   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:41:47.988137   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:41:48.076560   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:41:48.093447   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:41:48.094656   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:41:48.094793   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:41:48.253225   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:41:48.255034   80228 out.go:204]   - Booting up control plane ...
	I0814 17:41:48.255160   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:41:48.259041   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:41:48.260074   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:41:48.260862   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:41:48.262910   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:41:47.513415   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:50.012937   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:52.013499   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:54.514150   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:57.013146   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:59.013393   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.014185   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.441261   79871 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.187019598s)
	I0814 17:42:01.441333   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:01.457213   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:01.466802   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:01.475719   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:01.475736   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:01.475784   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:42:01.484555   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:01.484618   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:01.493956   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:42:01.503873   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:01.503923   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:01.514710   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.524473   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:01.524531   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.534749   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:42:01.544491   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:01.544558   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:01.555481   79871 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:01.599801   79871 kubeadm.go:310] W0814 17:42:01.575622    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.600615   79871 kubeadm.go:310] W0814 17:42:01.576625    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.703064   79871 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:03.513007   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:05.514241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:09.627141   79871 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:09.627216   79871 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:09.627344   79871 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:09.627480   79871 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:09.627638   79871 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:09.627717   79871 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:09.629272   79871 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:09.629370   79871 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:09.629472   79871 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:09.629592   79871 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:09.629712   79871 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:09.629780   79871 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:09.629826   79871 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:09.629898   79871 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:09.629963   79871 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:09.630076   79871 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:09.630198   79871 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:09.630253   79871 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:09.630314   79871 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:09.630357   79871 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:09.630412   79871 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:09.630457   79871 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:09.630509   79871 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:09.630560   79871 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:09.630629   79871 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:09.630688   79871 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:09.632664   79871 out.go:204]   - Booting up control plane ...
	I0814 17:42:09.632763   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:09.632878   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:09.632963   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:09.633100   79871 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:09.633207   79871 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:09.633252   79871 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:09.633412   79871 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:09.633542   79871 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:09.633624   79871 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004125702s
	I0814 17:42:09.633727   79871 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:09.633814   79871 kubeadm.go:310] [api-check] The API server is healthy after 4.501648596s
	I0814 17:42:09.633967   79871 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:09.634119   79871 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:09.634169   79871 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:09.634328   79871 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-885666 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:09.634400   79871 kubeadm.go:310] [bootstrap-token] Using token: 17ct2j.hazurgskaspe26qx
	I0814 17:42:09.635732   79871 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:09.635859   79871 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:09.635990   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:09.636141   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:09.636250   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:09.636347   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:09.636485   79871 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:09.636657   79871 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:09.636708   79871 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:09.636747   79871 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:09.636753   79871 kubeadm.go:310] 
	I0814 17:42:09.636813   79871 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:09.636835   79871 kubeadm.go:310] 
	I0814 17:42:09.636972   79871 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:09.636995   79871 kubeadm.go:310] 
	I0814 17:42:09.637029   79871 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:09.637120   79871 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:09.637185   79871 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:09.637195   79871 kubeadm.go:310] 
	I0814 17:42:09.637267   79871 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:09.637277   79871 kubeadm.go:310] 
	I0814 17:42:09.637315   79871 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:09.637321   79871 kubeadm.go:310] 
	I0814 17:42:09.637384   79871 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:09.637461   79871 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:09.637523   79871 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:09.637529   79871 kubeadm.go:310] 
	I0814 17:42:09.637623   79871 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:09.637691   79871 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:09.637703   79871 kubeadm.go:310] 
	I0814 17:42:09.637779   79871 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.637866   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:09.637890   79871 kubeadm.go:310] 	--control-plane 
	I0814 17:42:09.637899   79871 kubeadm.go:310] 
	I0814 17:42:09.638010   79871 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:09.638020   79871 kubeadm.go:310] 
	I0814 17:42:09.638098   79871 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.638211   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:09.638234   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:42:09.638246   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:09.639748   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:09.641031   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:09.652173   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:42:09.670482   79871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-885666 minikube.k8s.io/updated_at=2024_08_14T17_42_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=default-k8s-diff-port-885666 minikube.k8s.io/primary=true
	I0814 17:42:09.703097   79871 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:09.881340   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:10.381470   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:07.516539   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.015456   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.882013   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.382239   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.881638   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.381703   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.881401   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.381524   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.881402   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.381696   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.498441   79871 kubeadm.go:1113] duration metric: took 4.827929439s to wait for elevateKubeSystemPrivileges
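The burst of `kubectl get sa default` calls above (roughly one every 500ms for about 5s) is the elevateKubeSystemPrivileges step: minikube binds the kube-system default service account to cluster-admin via the clusterrolebinding created earlier, then polls until the default service account exists, which only happens once kube-controller-manager is up and reconciling. A small client-go sketch of the same polling idea; minikube itself shells out to the bundled kubectl as the log shows, so this is an illustration, not its actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as used on the guest in the log; adjust for other setups.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // The "default" ServiceAccount only appears once kube-controller-manager
            // has created it, so its presence is a cheap readiness signal.
            if _, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        panic("timed out waiting for the default service account")
    }
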
	I0814 17:42:14.498474   79871 kubeadm.go:394] duration metric: took 4m59.336328921s to StartCluster
	I0814 17:42:14.498493   79871 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.498581   79871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:42:14.501029   79871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.501309   79871 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:42:14.501432   79871 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:42:14.501508   79871 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501541   79871 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501550   79871 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:42:14.501577   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501590   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:42:14.501619   79871 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501667   79871 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501677   79871 addons.go:243] addon metrics-server should already be in state true
	I0814 17:42:14.501716   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501593   79871 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501840   79871 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-885666"
	I0814 17:42:14.502106   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502142   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502160   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502174   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502176   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502199   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502371   79871 out.go:177] * Verifying Kubernetes components...
	I0814 17:42:14.504085   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:42:14.519401   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0814 17:42:14.519631   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0814 17:42:14.520085   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520196   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520701   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520722   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.520789   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0814 17:42:14.520978   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520994   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.521255   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.521519   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521524   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521703   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.522021   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.522051   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.522548   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.522572   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.522864   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.523507   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.523550   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.525737   79871 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.525759   79871 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:42:14.525789   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.526144   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.526170   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.538930   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0814 17:42:14.538995   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0814 17:42:14.539567   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.539594   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.540125   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540138   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540266   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540289   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540624   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540770   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540825   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.540970   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.542540   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.542848   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.544439   79871 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:42:14.544444   79871 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:42:14.544881   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0814 17:42:14.545315   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.545575   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:42:14.545586   79871 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:42:14.545601   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545672   79871 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.545691   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:42:14.545708   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545750   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.545759   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.546339   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.547167   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.547290   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.549794   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.549829   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550423   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550637   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550707   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551025   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551119   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551168   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551302   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.551658   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.567203   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I0814 17:42:14.567613   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.568141   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.568167   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.568484   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.568678   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.570337   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.570867   79871 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.570888   79871 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:42:14.570906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.574091   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574562   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.574586   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574667   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.574857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.575039   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.575197   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.675594   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:42:14.694520   79871 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702650   79871 node_ready.go:49] node "default-k8s-diff-port-885666" has status "Ready":"True"
	I0814 17:42:14.702672   79871 node_ready.go:38] duration metric: took 8.119351ms for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702684   79871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:14.707535   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:14.762686   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.805275   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.837118   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:42:14.837143   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:42:14.881848   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:42:14.881872   79871 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:42:14.902731   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.902762   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903058   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903076   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.903092   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.903111   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903448   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.903484   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903493   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.908980   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.908995   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.909239   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.909310   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.909336   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.920224   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:14.920249   79871 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:42:14.955256   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:15.297167   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297190   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297544   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.297602   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297631   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.297649   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297659   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297865   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297885   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842348   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842376   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.842688   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.842707   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842716   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842738   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.842805   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.843057   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.843070   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.843081   79871 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-885666"
	I0814 17:42:15.844747   79871 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 17:42:12.513055   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:14.514298   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:15.845895   79871 addons.go:510] duration metric: took 1.344461878s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
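Note that "Verifying addon metrics-server" here only confirms the manifests were applied; the metrics-server pod itself is listed as Pending further down because this test profile points the addon at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), an image that cannot actually be pulled, which is presumably what the metrics-server waits elsewhere in this report time out on. A rough client-go sketch of what a real readiness check for the addon would look like, assuming the stock Deployment name metrics-server in kube-system:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Deployment name/namespace follow the stock metrics-server addon layout.
        dep, err := client.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("metrics-server: %d/%d replicas ready\n", dep.Status.ReadyReplicas, dep.Status.Replicas)
    }
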
	I0814 17:42:16.714014   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:18.715243   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:17.013231   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:19.013966   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:20.507978   79367 pod_ready.go:81] duration metric: took 4m0.001138158s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	E0814 17:42:20.508026   79367 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:42:20.508048   79367 pod_ready.go:38] duration metric: took 4m6.305785273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:20.508081   79367 kubeadm.go:597] duration metric: took 4m13.455842043s to restartPrimaryControlPlane
	W0814 17:42:20.508145   79367 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:42:20.508186   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:42:20.714660   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.714687   79871 pod_ready.go:81] duration metric: took 6.007129076s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.714696   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719517   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.719542   79871 pod_ready.go:81] duration metric: took 4.838754ms for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719554   79871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724787   79871 pod_ready.go:92] pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.724816   79871 pod_ready.go:81] duration metric: took 5.250194ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724834   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731431   79871 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.731456   79871 pod_ready.go:81] duration metric: took 1.00661383s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731468   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736442   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.736467   79871 pod_ready.go:81] duration metric: took 4.989787ms for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736480   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911495   79871 pod_ready.go:92] pod "kube-proxy-254cb" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.911520   79871 pod_ready.go:81] duration metric: took 175.03218ms for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911529   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311700   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:22.311730   79871 pod_ready.go:81] duration metric: took 400.194781ms for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311739   79871 pod_ready.go:38] duration metric: took 7.609043377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
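Each of the pod_ready waits above keys off the pod's PodReady condition rather than just its phase, which is why a Pending metrics-server pod stuck at ContainersNotReady keeps the parallel 4m0s wait (process 79367) from ever succeeding. A compact client-go sketch of the same condition check, with the pod name taken from this log purely as an example:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True, the same
    // signal the pod_ready.go waits in this log are based on.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name copied from the log above; any kube-system pod works the same way.
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-885666", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", podReady(pod))
    }
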
	I0814 17:42:22.311752   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:42:22.311800   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:42:22.326995   79871 api_server.go:72] duration metric: took 7.825649112s to wait for apiserver process to appear ...
	I0814 17:42:22.327018   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:42:22.327036   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:42:22.331069   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:42:22.332077   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:42:22.332096   79871 api_server.go:131] duration metric: took 5.0724ms to wait for apiserver health ...
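The healthz step above is a plain HTTPS GET against https://192.168.50.184:8444/healthz that counts as healthy on a 200 with an "ok" body; port 8444 is used because this profile (default-k8s-diff-port) deliberately runs the apiserver on a non-default port. A standard-library sketch of the same probe follows; TLS verification is skipped here only to keep it short, and a real check should trust the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above.
        const url = "https://192.168.50.184:8444/healthz"
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Skipped only to keep the sketch short; pin the cluster CA in real code.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    }
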
	I0814 17:42:22.332103   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:42:22.514565   79871 system_pods.go:59] 9 kube-system pods found
	I0814 17:42:22.514595   79871 system_pods.go:61] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.514601   79871 system_pods.go:61] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.514606   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.514610   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.514614   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.514617   79871 system_pods.go:61] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.514620   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.514626   79871 system_pods.go:61] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.514631   79871 system_pods.go:61] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.514639   79871 system_pods.go:74] duration metric: took 182.531543ms to wait for pod list to return data ...
	I0814 17:42:22.514647   79871 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:42:22.713101   79871 default_sa.go:45] found service account: "default"
	I0814 17:42:22.713125   79871 default_sa.go:55] duration metric: took 198.471814ms for default service account to be created ...
	I0814 17:42:22.713136   79871 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:42:22.914576   79871 system_pods.go:86] 9 kube-system pods found
	I0814 17:42:22.914619   79871 system_pods.go:89] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.914628   79871 system_pods.go:89] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.914635   79871 system_pods.go:89] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.914643   79871 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.914650   79871 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.914657   79871 system_pods.go:89] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.914665   79871 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.914678   79871 system_pods.go:89] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.914689   79871 system_pods.go:89] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.914705   79871 system_pods.go:126] duration metric: took 201.563199ms to wait for k8s-apps to be running ...
	I0814 17:42:22.914716   79871 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:42:22.914768   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:22.928499   79871 system_svc.go:56] duration metric: took 13.774119ms WaitForService to wait for kubelet
	I0814 17:42:22.928525   79871 kubeadm.go:582] duration metric: took 8.427183796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:42:22.928543   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:42:23.112363   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:42:23.112398   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:42:23.112410   79871 node_conditions.go:105] duration metric: took 183.861382ms to run NodePressure ...
	I0814 17:42:23.112423   79871 start.go:241] waiting for startup goroutines ...
	I0814 17:42:23.112432   79871 start.go:246] waiting for cluster config update ...
	I0814 17:42:23.112446   79871 start.go:255] writing updated cluster config ...
	I0814 17:42:23.112792   79871 ssh_runner.go:195] Run: rm -f paused
	I0814 17:42:23.162700   79871 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:42:23.164689   79871 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-885666" cluster and "default" namespace by default
	I0814 17:42:28.263217   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:42:28.263629   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:28.263853   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:33.264169   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:33.264403   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:43.264648   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:43.264858   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
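The lines from process 80228 interleaved above come from a third, parallel profile whose kubeadm run is stuck in the [kubelet-check] phase: after an initial 40s grace period kubeadm keeps polling the kubelet's local healthz endpoint on 127.0.0.1:10248, and "connection refused" means nothing is listening on that port at all, i.e. the kubelet never came up, as opposed to answering but reporting unhealthy. A small Go sketch of the same poll-until-deadline idea; the interval and timeout here are illustrative, not kubeadm's exact schedule:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitForKubelet polls the kubelet's local healthz endpoint until it answers
    // 200 or the deadline passes, mirroring kubeadm's [kubelet-check] behaviour.
    func waitForKubelet(timeout time.Duration) error {
        client := &http.Client{Timeout: 2 * time.Second}
        deadline := time.Now().Add(timeout)
        for {
            resp, err := client.Get("http://localhost:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("kubelet on localhost:10248 not healthy after %s", timeout)
            }
            time.Sleep(5 * time.Second)
        }
    }

    func main() {
        if err := waitForKubelet(4 * time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kubelet is healthy")
    }

On the node itself, journalctl -u kubelet is the usual next step once this probe keeps refusing connections.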
	I0814 17:42:46.859569   79367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.351355314s)
	I0814 17:42:46.859653   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:46.875530   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:46.884772   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:46.894185   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:46.894208   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:46.894258   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:42:46.903690   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:46.903748   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:46.913277   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:42:46.922120   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:46.922173   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:46.931143   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.939936   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:46.939997   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.949257   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:42:46.958109   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:46.958169   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
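The grep/rm sequence above is minikube's stale-config cleanup before re-running kubeadm init: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it still points at https://control-plane.minikube.internal:8443 and removed otherwise; here the preceding kubeadm reset already deleted all four, so every grep exits with status 2 and the removals are no-ops. A rough Go sketch of that check, as an illustration of the idea rather than minikube's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Config is missing or points somewhere else: drop it so kubeadm
                // regenerates it against the expected endpoint.
                fmt.Println("removing stale config:", f)
                _ = os.Remove(f)
                continue
            }
            fmt.Println("keeping:", f)
        }
    }
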
	I0814 17:42:46.967609   79367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:47.010119   79367 kubeadm.go:310] W0814 17:42:46.983769    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.010889   79367 kubeadm.go:310] W0814 17:42:46.984558    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.122746   79367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:55.571963   79367 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:55.572017   79367 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:55.572127   79367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:55.572236   79367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:55.572314   79367 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:55.572385   79367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:55.574178   79367 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:55.574288   79367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:55.574372   79367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:55.574485   79367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:55.574573   79367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:55.574669   79367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:55.574740   79367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:55.574811   79367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:55.574909   79367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:55.575014   79367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:55.575135   79367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:55.575187   79367 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:55.575238   79367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:55.575288   79367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:55.575359   79367 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:55.575438   79367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:55.575521   79367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:55.575608   79367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:55.575759   79367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:55.575869   79367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:55.577331   79367 out.go:204]   - Booting up control plane ...
	I0814 17:42:55.577429   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:55.577511   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:55.577587   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:55.577773   79367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:55.577908   79367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:55.577968   79367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:55.578152   79367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:55.578301   79367 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:55.578368   79367 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.938552ms
	I0814 17:42:55.578428   79367 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:55.578480   79367 kubeadm.go:310] [api-check] The API server is healthy after 5.00239154s
	I0814 17:42:55.578605   79367 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:55.578777   79367 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:55.578863   79367 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:55.579025   79367 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-545149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:55.579100   79367 kubeadm.go:310] [bootstrap-token] Using token: qzd0yh.k8a8j7f6vmqndeav
	I0814 17:42:55.580318   79367 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:55.580429   79367 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:55.580503   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:55.580683   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:55.580839   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:55.580935   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:55.581047   79367 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:55.581197   79367 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:55.581235   79367 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:55.581279   79367 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:55.581285   79367 kubeadm.go:310] 
	I0814 17:42:55.581339   79367 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:55.581355   79367 kubeadm.go:310] 
	I0814 17:42:55.581470   79367 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:55.581480   79367 kubeadm.go:310] 
	I0814 17:42:55.581519   79367 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:55.581586   79367 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:55.581654   79367 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:55.581663   79367 kubeadm.go:310] 
	I0814 17:42:55.581749   79367 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:55.581757   79367 kubeadm.go:310] 
	I0814 17:42:55.581798   79367 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:55.581804   79367 kubeadm.go:310] 
	I0814 17:42:55.581857   79367 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:55.581944   79367 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:55.582007   79367 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:55.582014   79367 kubeadm.go:310] 
	I0814 17:42:55.582085   79367 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:55.582148   79367 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:55.582154   79367 kubeadm.go:310] 
	I0814 17:42:55.582221   79367 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582313   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:55.582333   79367 kubeadm.go:310] 	--control-plane 
	I0814 17:42:55.582336   79367 kubeadm.go:310] 
	I0814 17:42:55.582426   79367 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:55.582434   79367 kubeadm.go:310] 
	I0814 17:42:55.582518   79367 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582678   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:55.582691   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:42:55.582697   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:55.584337   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:55.585493   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:55.596201   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:42:55.617012   79367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:55.617115   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:55.617152   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-545149 minikube.k8s.io/updated_at=2024_08_14T17_42_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=no-preload-545149 minikube.k8s.io/primary=true
	I0814 17:42:55.794262   79367 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:55.794421   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.294450   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.795280   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.294604   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.794700   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.294863   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.795404   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.295066   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.794529   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.294720   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.409254   79367 kubeadm.go:1113] duration metric: took 4.79220609s to wait for elevateKubeSystemPrivileges
	I0814 17:43:00.409300   79367 kubeadm.go:394] duration metric: took 4m53.401266889s to StartCluster
	I0814 17:43:00.409323   79367 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.409419   79367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:43:00.411076   79367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.411313   79367 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:43:00.411438   79367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:43:00.411521   79367 addons.go:69] Setting storage-provisioner=true in profile "no-preload-545149"
	I0814 17:43:00.411529   79367 addons.go:69] Setting default-storageclass=true in profile "no-preload-545149"
	I0814 17:43:00.411552   79367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545149"
	I0814 17:43:00.411554   79367 addons.go:234] Setting addon storage-provisioner=true in "no-preload-545149"
	I0814 17:43:00.411564   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:43:00.411568   79367 addons.go:69] Setting metrics-server=true in profile "no-preload-545149"
	W0814 17:43:00.411566   79367 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:43:00.411601   79367 addons.go:234] Setting addon metrics-server=true in "no-preload-545149"
	W0814 17:43:00.411612   79367 addons.go:243] addon metrics-server should already be in state true
	I0814 17:43:00.411637   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411646   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411922   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.411954   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412019   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412053   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412076   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412103   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412914   79367 out.go:177] * Verifying Kubernetes components...
	I0814 17:43:00.414471   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:43:00.427965   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0814 17:43:00.427966   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0814 17:43:00.428460   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428608   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428985   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429004   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429130   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429147   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429206   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0814 17:43:00.429346   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429443   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429498   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.429543   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.430131   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.430152   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.430435   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.430446   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.430718   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.431238   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.431270   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.433273   79367 addons.go:234] Setting addon default-storageclass=true in "no-preload-545149"
	W0814 17:43:00.433292   79367 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:43:00.433319   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.433551   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.433581   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.450138   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0814 17:43:00.450327   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0814 17:43:00.450697   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.450818   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.451527   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451547   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451695   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451706   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451958   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.452224   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.452283   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.453110   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.453141   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.453937   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.455467   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0814 17:43:00.455825   79367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:43:00.455934   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.456456   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.456479   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.456964   79367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.456981   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:43:00.456999   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.457000   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.457144   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.459264   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.460208   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460606   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.460636   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460750   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.460858   79367 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:43:00.460989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.461150   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.461281   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.462118   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:43:00.462138   79367 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:43:00.462156   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.465200   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465643   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.465710   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465829   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.466004   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.466165   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.466312   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.478054   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0814 17:43:00.478616   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.479176   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.479198   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.479536   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.479725   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.481351   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.481574   79367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.481588   79367 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:43:00.481606   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.484454   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484738   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.484771   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.485222   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.485370   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.485485   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.617562   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:43:00.665134   79367 node_ready.go:35] waiting up to 6m0s for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673659   79367 node_ready.go:49] node "no-preload-545149" has status "Ready":"True"
	I0814 17:43:00.673680   79367 node_ready.go:38] duration metric: took 8.508683ms for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673689   79367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:00.680313   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:00.810401   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.827621   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.871727   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:43:00.871752   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:43:00.969061   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:43:00.969088   79367 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:43:01.103808   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.103839   79367 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:43:01.198160   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.880623   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052957744s)
	I0814 17:43:01.880683   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880697   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.880749   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070305568s)
	I0814 17:43:01.880785   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880804   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881075   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881093   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881103   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881115   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881248   79367 main.go:141] libmachine: (no-preload-545149) DBG | Closing plugin on server side
	I0814 17:43:01.881284   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881312   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881336   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881375   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881385   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881396   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881682   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881703   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.896050   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.896076   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.896351   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.896370   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131404   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131427   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.131744   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.131768   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131780   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131788   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.132004   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.132026   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.132042   79367 addons.go:475] Verifying addon metrics-server=true in "no-preload-545149"
	I0814 17:43:02.133699   79367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:43:02.135365   79367 addons.go:510] duration metric: took 1.72392081s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:43:02.687160   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:03.265508   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:03.265720   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:05.186062   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:07.187193   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:09.188957   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.188990   79367 pod_ready.go:81] duration metric: took 8.508650006s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.189003   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194469   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.194492   79367 pod_ready.go:81] duration metric: took 5.48133ms for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194501   79367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199127   79367 pod_ready.go:92] pod "etcd-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.199150   79367 pod_ready.go:81] duration metric: took 4.643296ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199159   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203804   79367 pod_ready.go:92] pod "kube-apiserver-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.203825   79367 pod_ready.go:81] duration metric: took 4.659513ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203837   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208443   79367 pod_ready.go:92] pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.208465   79367 pod_ready.go:81] duration metric: took 4.620634ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208479   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584443   79367 pod_ready.go:92] pod "kube-proxy-s6bps" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.584471   79367 pod_ready.go:81] duration metric: took 375.985094ms for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584481   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985476   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.985504   79367 pod_ready.go:81] duration metric: took 401.014791ms for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985515   79367 pod_ready.go:38] duration metric: took 9.311816641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:09.985534   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:43:09.985603   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:43:10.002239   79367 api_server.go:72] duration metric: took 9.590875358s to wait for apiserver process to appear ...
	I0814 17:43:10.002276   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:43:10.002304   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:43:10.009410   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:43:10.010351   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:43:10.010381   79367 api_server.go:131] duration metric: took 8.098086ms to wait for apiserver health ...
	I0814 17:43:10.010389   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:43:10.189597   79367 system_pods.go:59] 9 kube-system pods found
	I0814 17:43:10.189629   79367 system_pods.go:61] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.189634   79367 system_pods.go:61] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.189638   79367 system_pods.go:61] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.189642   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.189646   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.189649   79367 system_pods.go:61] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.189652   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.189658   79367 system_pods.go:61] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.189662   79367 system_pods.go:61] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.189670   79367 system_pods.go:74] duration metric: took 179.275641ms to wait for pod list to return data ...
	I0814 17:43:10.189678   79367 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:43:10.385690   79367 default_sa.go:45] found service account: "default"
	I0814 17:43:10.385715   79367 default_sa.go:55] duration metric: took 196.030333ms for default service account to be created ...
	I0814 17:43:10.385723   79367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:43:10.590237   79367 system_pods.go:86] 9 kube-system pods found
	I0814 17:43:10.590272   79367 system_pods.go:89] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.590279   79367 system_pods.go:89] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.590285   79367 system_pods.go:89] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.590291   79367 system_pods.go:89] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.590299   79367 system_pods.go:89] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.590306   79367 system_pods.go:89] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.590312   79367 system_pods.go:89] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.590322   79367 system_pods.go:89] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.590335   79367 system_pods.go:89] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.590351   79367 system_pods.go:126] duration metric: took 204.620982ms to wait for k8s-apps to be running ...
	I0814 17:43:10.590364   79367 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:43:10.590418   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:10.605594   79367 system_svc.go:56] duration metric: took 15.223089ms WaitForService to wait for kubelet
	I0814 17:43:10.605624   79367 kubeadm.go:582] duration metric: took 10.194267533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:43:10.605644   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:43:10.786127   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:43:10.786160   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:43:10.786173   79367 node_conditions.go:105] duration metric: took 180.522994ms to run NodePressure ...
	I0814 17:43:10.786187   79367 start.go:241] waiting for startup goroutines ...
	I0814 17:43:10.786197   79367 start.go:246] waiting for cluster config update ...
	I0814 17:43:10.786210   79367 start.go:255] writing updated cluster config ...
	I0814 17:43:10.786498   79367 ssh_runner.go:195] Run: rm -f paused
	I0814 17:43:10.834139   79367 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:43:10.836315   79367 out.go:177] * Done! kubectl is now configured to use "no-preload-545149" cluster and "default" namespace by default
	I0814 17:43:43.267316   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:43.267596   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:43.267623   80228 kubeadm.go:310] 
	I0814 17:43:43.267680   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:43:43.267757   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:43:43.267778   80228 kubeadm.go:310] 
	I0814 17:43:43.267839   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:43:43.267894   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:43:43.268029   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:43:43.268044   80228 kubeadm.go:310] 
	I0814 17:43:43.268190   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:43:43.268247   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:43:43.268296   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:43:43.268305   80228 kubeadm.go:310] 
	I0814 17:43:43.268446   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:43:43.268561   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:43:43.268572   80228 kubeadm.go:310] 
	I0814 17:43:43.268741   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:43:43.268907   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:43:43.269021   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:43:43.269120   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:43:43.269131   80228 kubeadm.go:310] 
	I0814 17:43:43.269560   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:43:43.269642   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:43:43.269698   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 17:43:43.269809   80228 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 17:43:43.269853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:43:43.733975   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:43.748632   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:43:43.758474   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:43:43.758493   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:43:43.758543   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:43:43.767721   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:43:43.767777   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:43:43.777259   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:43:43.786562   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:43:43.786623   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:43:43.795253   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.803677   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:43:43.803733   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.812416   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:43:43.821020   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:43:43.821075   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:43:43.829709   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:43:44.024836   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:45:40.060126   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:45:40.060266   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:45:40.061931   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:45:40.062003   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:45:40.062110   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:45:40.062231   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:45:40.062360   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:45:40.062459   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:45:40.063940   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:45:40.064041   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:45:40.064124   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:45:40.064230   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:45:40.064305   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:45:40.064398   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:45:40.064471   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:45:40.064550   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:45:40.064632   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:45:40.064712   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:45:40.064798   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:45:40.064857   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:45:40.064913   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:45:40.064956   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:45:40.065040   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:45:40.065146   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:45:40.065222   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:45:40.065366   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:45:40.065490   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:45:40.065547   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:45:40.065648   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:45:40.068108   80228 out.go:204]   - Booting up control plane ...
	I0814 17:45:40.068211   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:45:40.068294   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:45:40.068395   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:45:40.068522   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:45:40.068675   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:45:40.068751   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:45:40.068843   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069048   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069141   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069393   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069510   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069756   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069823   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069982   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070051   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.070204   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070211   80228 kubeadm.go:310] 
	I0814 17:45:40.070244   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:45:40.070291   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:45:40.070299   80228 kubeadm.go:310] 
	I0814 17:45:40.070337   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:45:40.070379   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:45:40.070504   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:45:40.070521   80228 kubeadm.go:310] 
	I0814 17:45:40.070673   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:45:40.070723   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:45:40.070764   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:45:40.070776   80228 kubeadm.go:310] 
	I0814 17:45:40.070876   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:45:40.070945   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:45:40.070953   80228 kubeadm.go:310] 
	I0814 17:45:40.071045   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:45:40.071151   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:45:40.071246   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:45:40.071363   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:45:40.071453   80228 kubeadm.go:310] 
	I0814 17:45:40.071481   80228 kubeadm.go:394] duration metric: took 8m2.599023024s to StartCluster
	I0814 17:45:40.071554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:45:40.071617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:45:40.115691   80228 cri.go:89] found id: ""
	I0814 17:45:40.115719   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.115727   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:45:40.115734   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:45:40.115798   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:45:40.155537   80228 cri.go:89] found id: ""
	I0814 17:45:40.155566   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.155574   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:45:40.155580   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:45:40.155645   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:45:40.189570   80228 cri.go:89] found id: ""
	I0814 17:45:40.189604   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.189616   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:45:40.189625   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:45:40.189708   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:45:40.222496   80228 cri.go:89] found id: ""
	I0814 17:45:40.222521   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.222528   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:45:40.222533   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:45:40.222590   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:45:40.255095   80228 cri.go:89] found id: ""
	I0814 17:45:40.255129   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.255142   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:45:40.255151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:45:40.255233   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:45:40.290297   80228 cri.go:89] found id: ""
	I0814 17:45:40.290326   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.290341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:45:40.290348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:45:40.290396   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:45:40.326660   80228 cri.go:89] found id: ""
	I0814 17:45:40.326685   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.326695   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:45:40.326701   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:45:40.326764   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:45:40.359867   80228 cri.go:89] found id: ""
	I0814 17:45:40.359896   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.359908   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:45:40.359918   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:45:40.359933   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:45:40.397513   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:45:40.397543   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:45:40.451744   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:45:40.451778   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:45:40.475817   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:45:40.475843   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:45:40.575913   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:45:40.575933   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:45:40.575946   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 17:45:40.683455   80228 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:45:40.683509   80228 out.go:239] * 
	W0814 17:45:40.683587   80228 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.683623   80228 out.go:239] * 
	W0814 17:45:40.684431   80228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:45:40.688043   80228 out.go:177] 
	W0814 17:45:40.689238   80228 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.689291   80228 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:45:40.689318   80228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:45:40.690913   80228 out.go:177] 
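	The suggestion above points at the kubelet cgroup driver. A minimal sketch of how that remediation could be retried by hand, assuming the failing profile name is substituted for <profile> (a placeholder, not a name from this run); the --extra-config flag is the one quoted in the suggestion, and the runtime and Kubernetes version match the cri-o v1.20.0 init attempt above. The first two commands are run on the node, per the kubeadm advice:
	
		# on the node: see why the kubelet keeps refusing connections on 127.0.0.1:10248
		systemctl status kubelet
		journalctl -xeu kubelet
		# on the host: retry the start with the systemd cgroup driver ("<profile>" is a placeholder profile name)
		minikube start -p <profile> --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd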
	
	
	==> CRI-O <==
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.220286918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657885220265525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c64a9f2a-cf30-437c-a3f0-969f53c3645e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.220711303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9ba4a14-2ebb-4b39-babc-ad11ab2dc055 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.220771500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9ba4a14-2ebb-4b39-babc-ad11ab2dc055 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.220974331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09,PodSandboxId:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657336334630107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474,PodSandboxId:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657336327637917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e,PodSandboxId:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335837826893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366,PodSandboxId:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335766000146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-5
3cd65350631,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d,PodSandboxId:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657323926969931
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f,PodSandboxId:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657323944015362,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c,PodSandboxId:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657323897457621,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2,PodSandboxId:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657323838111643,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44,PodSandboxId:8722d35792d91589df21a449dd3ad27d7753ab57bafb835a3eb16ca6f2795c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657038716131875,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9ba4a14-2ebb-4b39-babc-ad11ab2dc055 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.261528049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e434981-4375-49d0-9455-577ebd0d97ed name=/runtime.v1.RuntimeService/Version
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.261605452Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e434981-4375-49d0-9455-577ebd0d97ed name=/runtime.v1.RuntimeService/Version
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.262668486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12be1259-2354-4155-a914-79b96b3de2bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.263061955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657885263040465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12be1259-2354-4155-a914-79b96b3de2bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.263525087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9de5ba52-b0ae-4b5c-9fc1-01d4a4290428 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.263575865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9de5ba52-b0ae-4b5c-9fc1-01d4a4290428 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.263928126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09,PodSandboxId:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657336334630107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474,PodSandboxId:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657336327637917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e,PodSandboxId:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335837826893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366,PodSandboxId:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335766000146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-5
3cd65350631,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d,PodSandboxId:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657323926969931
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f,PodSandboxId:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657323944015362,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c,PodSandboxId:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657323897457621,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2,PodSandboxId:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657323838111643,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44,PodSandboxId:8722d35792d91589df21a449dd3ad27d7753ab57bafb835a3eb16ca6f2795c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657038716131875,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9de5ba52-b0ae-4b5c-9fc1-01d4a4290428 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.300866746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e43be7a4-b874-4632-aa64-7bb253c58d29 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.300936729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e43be7a4-b874-4632-aa64-7bb253c58d29 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.301834222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4429436-c3b1-4373-86cb-83a21d7234eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.302281215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657885302258384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4429436-c3b1-4373-86cb-83a21d7234eb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.302677397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9e2e713-bb4d-4115-8f9f-45791042333f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.302732522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9e2e713-bb4d-4115-8f9f-45791042333f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.302920565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09,PodSandboxId:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657336334630107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474,PodSandboxId:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657336327637917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e,PodSandboxId:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335837826893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366,PodSandboxId:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335766000146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-5
3cd65350631,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d,PodSandboxId:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657323926969931
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f,PodSandboxId:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657323944015362,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c,PodSandboxId:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657323897457621,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2,PodSandboxId:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657323838111643,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44,PodSandboxId:8722d35792d91589df21a449dd3ad27d7753ab57bafb835a3eb16ca6f2795c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657038716131875,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9e2e713-bb4d-4115-8f9f-45791042333f name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.335018978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f63bb48-37be-4cef-8831-fc061f06d5fb name=/runtime.v1.RuntimeService/Version
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.335089631Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f63bb48-37be-4cef-8831-fc061f06d5fb name=/runtime.v1.RuntimeService/Version
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.336082777Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba7257ab-7433-4e7c-a37d-ac34f8fec58a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.336554102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657885336529736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba7257ab-7433-4e7c-a37d-ac34f8fec58a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.337134244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37db6a5e-64d1-4ab4-b9b2-fa5591332637 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.337235912Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37db6a5e-64d1-4ab4-b9b2-fa5591332637 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:51:25 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:51:25.337459414Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09,PodSandboxId:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657336334630107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474,PodSandboxId:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657336327637917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e,PodSandboxId:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335837826893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366,PodSandboxId:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335766000146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-5
3cd65350631,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d,PodSandboxId:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657323926969931
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f,PodSandboxId:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657323944015362,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c,PodSandboxId:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657323897457621,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2,PodSandboxId:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657323838111643,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44,PodSandboxId:8722d35792d91589df21a449dd3ad27d7753ab57bafb835a3eb16ca6f2795c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657038716131875,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37db6a5e-64d1-4ab4-b9b2-fa5591332637 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	503e9df483a62       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   c85483bcc56c2       kube-proxy-254cb
	2bbb9ed10c9df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ff00e43e463e3       storage-provisioner
	77c6b70d58c27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f09e9cfc17c5f       coredns-6f6b679f8f-nm28w
	ba2d721dacbac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   ebd5ed6cc8e2e       coredns-6f6b679f8f-k5qnj
	f4d593b5b514c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   bd8f0f711bacf       etcd-default-k8s-diff-port-885666
	d17baf91a2a7f       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   5f7bba7b43923       kube-scheduler-default-k8s-diff-port-885666
	2d572687dae08       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   88d5d70ea9c84       kube-apiserver-default-k8s-diff-port-885666
	7edcb95c40527       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   cd3bc6dc0b59b       kube-controller-manager-default-k8s-diff-port-885666
	b1cbb4963f9cc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   8722d35792d91       kube-apiserver-default-k8s-diff-port-885666
	
	
	==> coredns [77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-885666
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-885666
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=default-k8s-diff-port-885666
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T17_42_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:42:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-885666
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:51:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:47:24 +0000   Wed, 14 Aug 2024 17:42:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:47:24 +0000   Wed, 14 Aug 2024 17:42:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:47:24 +0000   Wed, 14 Aug 2024 17:42:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:47:24 +0000   Wed, 14 Aug 2024 17:42:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.184
	  Hostname:    default-k8s-diff-port-885666
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 77f491de9fd64d2f8fc1bc7b2c4fbd7d
	  System UUID:                77f491de-9fd6-4d2f-8fc1-bc7b2c4fbd7d
	  Boot ID:                    ee6ef590-015f-4ef0-8f7e-d46cb391e6b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-k5qnj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-6f6b679f8f-nm28w                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-default-k8s-diff-port-885666                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-885666             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-885666    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-254cb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-default-k8s-diff-port-885666             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-5q86r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s  kubelet          Node default-k8s-diff-port-885666 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s  kubelet          Node default-k8s-diff-port-885666 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s  kubelet          Node default-k8s-diff-port-885666 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node default-k8s-diff-port-885666 event: Registered Node default-k8s-diff-port-885666 in Controller
	
	
	==> dmesg <==
	[  +0.053045] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038602] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.808343] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.858660] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529056] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug14 17:37] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.066165] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067431] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.188815] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.150186] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.269086] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.112439] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +1.991859] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +0.057932] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.516594] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.614083] kauditd_printk_skb: 85 callbacks suppressed
	[Aug14 17:42] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.187600] systemd-fstab-generator[2624]: Ignoring "noauto" option for root device
	[  +4.708149] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.350341] systemd-fstab-generator[2942]: Ignoring "noauto" option for root device
	[  +5.905194] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[  +0.088423] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.898027] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f] <==
	{"level":"info","ts":"2024-08-14T17:42:04.242046Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.184:2380"}
	{"level":"info","ts":"2024-08-14T17:42:04.244198Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.184:2380"}
	{"level":"info","ts":"2024-08-14T17:42:04.241794Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-14T17:42:04.244524Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"bf2ced3b97aa693f","initial-advertise-peer-urls":["https://192.168.50.184:2380"],"listen-peer-urls":["https://192.168.50.184:2380"],"advertise-client-urls":["https://192.168.50.184:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.184:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-14T17:42:04.248128Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T17:42:04.475227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-14T17:42:04.475316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-14T17:42:04.475345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f received MsgPreVoteResp from bf2ced3b97aa693f at term 1"}
	{"level":"info","ts":"2024-08-14T17:42:04.475360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became candidate at term 2"}
	{"level":"info","ts":"2024-08-14T17:42:04.475365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f received MsgVoteResp from bf2ced3b97aa693f at term 2"}
	{"level":"info","ts":"2024-08-14T17:42:04.475374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bf2ced3b97aa693f became leader at term 2"}
	{"level":"info","ts":"2024-08-14T17:42:04.475381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bf2ced3b97aa693f elected leader bf2ced3b97aa693f at term 2"}
	{"level":"info","ts":"2024-08-14T17:42:04.481303Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:04.485653Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"bf2ced3b97aa693f","local-member-attributes":"{Name:default-k8s-diff-port-885666 ClientURLs:[https://192.168.50.184:2379]}","request-path":"/0/members/bf2ced3b97aa693f/attributes","cluster-id":"dfaeaf2ad25a061e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T17:42:04.485897Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:42:04.487111Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:42:04.491251Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:42:04.492264Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dfaeaf2ad25a061e","local-member-id":"bf2ced3b97aa693f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:04.492358Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:04.492396Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:04.495088Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T17:42:04.495205Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T17:42:04.497277Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T17:42:04.497841Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:42:04.498620Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.184:2379"}
	
	
	==> kernel <==
	 17:51:25 up 14 min,  0 users,  load average: 0.28, 0.16, 0.10
	Linux default-k8s-diff-port-885666 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c] <==
	W0814 17:47:07.525248       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:47:07.525293       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:47:07.526234       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:47:07.527493       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:48:07.526464       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:48:07.526658       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 17:48:07.528588       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:48:07.528794       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:48:07.528950       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:48:07.530201       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:50:07.529217       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:50:07.529730       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 17:50:07.531366       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:50:07.531467       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:50:07.531524       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:50:07.532724       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44] <==
	W0814 17:41:58.729046       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.828527       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.843405       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.871904       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.880374       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.882776       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.885143       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.894518       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.944913       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.956906       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.985443       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.020001       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.021375       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.054371       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.059235       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.068929       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.070309       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.136364       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.158235       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.241657       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.298615       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.316570       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.433264       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.460771       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.807237       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2] <==
	E0814 17:46:13.434660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:46:13.977056       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:46:43.441591       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:46:43.988779       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:47:13.448376       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:47:13.998027       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:47:24.572144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-885666"
	E0814 17:47:43.455452       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:47:44.006844       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:48:10.972955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="270.981µs"
	E0814 17:48:13.462410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:48:14.015229       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:48:23.970199       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="134.5µs"
	E0814 17:48:43.469465       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:48:44.026925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:49:13.475877       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:49:14.034636       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:49:43.481885       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:49:44.044637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:50:13.488696       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:50:14.052320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:50:43.495950       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:50:44.064102       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:51:13.503792       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:51:14.073369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:42:16.576482       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:42:16.586685       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.184"]
	E0814 17:42:16.586825       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:42:16.621957       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:42:16.622042       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:42:16.622082       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:42:16.624893       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:42:16.625222       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:42:16.625251       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:42:16.626664       1 config.go:197] "Starting service config controller"
	I0814 17:42:16.626712       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:42:16.626732       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:42:16.626736       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:42:16.629569       1 config.go:326] "Starting node config controller"
	I0814 17:42:16.629595       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:42:16.727230       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 17:42:16.727379       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:42:16.730227       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d] <==
	W0814 17:42:06.597927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 17:42:06.598071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:06.598302       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 17:42:06.598358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:06.598377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 17:42:06.598509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:06.598319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 17:42:06.598626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:06.598850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 17:42:06.598940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.417577       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 17:42:07.417683       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 17:42:07.433705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 17:42:07.433899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.497430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 17:42:07.497630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.511714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 17:42:07.511761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.583309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 17:42:07.583361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.743341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 17:42:07.743384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.768782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 17:42:07.768980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0814 17:42:10.088527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 17:50:14 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:14.957465    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:50:19 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:19.134305    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657819133760126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:19 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:19.134626    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657819133760126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:25 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:25.955289    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:50:29 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:29.137075    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657829136634161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:29 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:29.137282    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657829136634161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:39 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:39.138965    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657839138585317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:39 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:39.139022    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657839138585317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:40 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:40.958335    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:50:49 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:49.144010    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657849141411262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:49 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:49.144055    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657849141411262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:52 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:52.957794    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:50:59 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:59.145691    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657859145301000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:50:59 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:50:59.145738    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657859145301000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:06 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:51:06.953725    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:51:08 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:51:08.981577    2949 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:51:08 default-k8s-diff-port-885666 kubelet[2949]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:51:08 default-k8s-diff-port-885666 kubelet[2949]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:51:08 default-k8s-diff-port-885666 kubelet[2949]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:51:08 default-k8s-diff-port-885666 kubelet[2949]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:51:09 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:51:09.147549    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657869147211245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:09 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:51:09.147580    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657869147211245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:19 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:51:19.149033    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657879148796487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:19 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:51:19.149066    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657879148796487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:21 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:51:21.954761    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	
	
	==> storage-provisioner [2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474] <==
	I0814 17:42:16.445308       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 17:42:16.478240       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 17:42:16.478345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 17:42:16.492098       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 17:42:16.493070       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-885666_b4c1d616-34c5-489c-b574-4d9c19c202f2!
	I0814 17:42:16.496474       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a256549-c7e3-4b8b-b19c-b3b2b3d68570", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-885666_b4c1d616-34c5-489c-b574-4d9c19c202f2 became leader
	I0814 17:42:16.594241       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-885666_b4c1d616-34c5-489c-b574-4d9c19c202f2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-885666 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-5q86r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-885666 describe pod metrics-server-6867b74b74-5q86r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-885666 describe pod metrics-server-6867b74b74-5q86r: exit status 1 (62.640233ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-5q86r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-885666 describe pod metrics-server-6867b74b74-5q86r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.14s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0814 17:43:13.864527   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:44:16.591709   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:44:29.459824   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:44:58.428764   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:45:39.655476   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-545149 -n no-preload-545149
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-14 17:52:11.352706666 +0000 UTC m=+6165.037989481
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-545149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-545149 logs -n 25: (1.998407876s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-984053 sudo cat                              | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo find                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo crio                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-984053                                       | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-005029 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | disable-driver-mounts-005029                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:30 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-545149             | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309673            | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:33:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:33:46.321266   80228 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:33:46.321519   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321529   80228 out.go:304] Setting ErrFile to fd 2...
	I0814 17:33:46.321533   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321691   80228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:33:46.322185   80228 out.go:298] Setting JSON to false
	I0814 17:33:46.323102   80228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8170,"bootTime":1723648656,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:33:46.323161   80228 start.go:139] virtualization: kvm guest
	I0814 17:33:46.325361   80228 out.go:177] * [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:33:46.326668   80228 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:33:46.326679   80228 notify.go:220] Checking for updates...
	I0814 17:33:46.329217   80228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:33:46.330813   80228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:33:46.332019   80228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:33:46.333264   80228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:33:46.334480   80228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:33:46.336108   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:33:46.336521   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.336564   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.351154   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0814 17:33:46.351563   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.352042   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.352061   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.352395   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.352567   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.354248   80228 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 17:33:46.355547   80228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:33:46.355834   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.355865   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.370976   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0814 17:33:46.371452   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.371977   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.372008   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.372376   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.372624   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.407797   80228 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:33:46.408905   80228 start.go:297] selected driver: kvm2
	I0814 17:33:46.408918   80228 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.409022   80228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:33:46.409677   80228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.409753   80228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:33:46.424801   80228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:33:46.425288   80228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:33:46.425338   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:33:46.425349   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:33:46.425396   80228 start.go:340] cluster config:
	{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.425518   80228 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.427224   80228 out.go:177] * Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	I0814 17:33:46.428485   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:33:46.428516   80228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:33:46.428523   80228 cache.go:56] Caching tarball of preloaded images
	I0814 17:33:46.428589   80228 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:33:46.428600   80228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:33:46.428727   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:33:46.428899   80228 start.go:360] acquireMachinesLock for old-k8s-version-505584: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:33:47.579625   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:50.651557   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:56.731587   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:59.803787   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:05.883582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:08.959564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:15.035593   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:18.107634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:24.187624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:27.259634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:33.339631   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:36.411675   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:42.491633   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:45.563609   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:51.643582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:54.715620   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:00.795564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:03.867637   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:09.947634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:13.019646   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:19.099578   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:22.171640   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:28.251634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:31.323645   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:37.403627   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:40.475635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:46.555591   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:49.627635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:55.707632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:58.779532   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:04.859619   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:07.931632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:14.011612   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:17.083624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:23.163638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:26.235638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:29.240279   79521 start.go:364] duration metric: took 4m23.88398072s to acquireMachinesLock for "embed-certs-309673"
	I0814 17:36:29.240341   79521 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:29.240351   79521 fix.go:54] fixHost starting: 
	I0814 17:36:29.240703   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:29.240730   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:29.255901   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0814 17:36:29.256372   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:29.256816   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:36:29.256839   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:29.257153   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:29.257337   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:29.257518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:36:29.259382   79521 fix.go:112] recreateIfNeeded on embed-certs-309673: state=Stopped err=<nil>
	I0814 17:36:29.259419   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	W0814 17:36:29.259583   79521 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:29.261931   79521 out.go:177] * Restarting existing kvm2 VM for "embed-certs-309673" ...
	I0814 17:36:29.263301   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Start
	I0814 17:36:29.263487   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring networks are active...
	I0814 17:36:29.264251   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network default is active
	I0814 17:36:29.264797   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network mk-embed-certs-309673 is active
	I0814 17:36:29.265331   79521 main.go:141] libmachine: (embed-certs-309673) Getting domain xml...
	I0814 17:36:29.266055   79521 main.go:141] libmachine: (embed-certs-309673) Creating domain...
	I0814 17:36:29.237663   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:29.237704   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238088   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:36:29.238131   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238337   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:36:29.240159   79367 machine.go:97] duration metric: took 4m37.421920583s to provisionDockerMachine
	I0814 17:36:29.240195   79367 fix.go:56] duration metric: took 4m37.443181113s for fixHost
	I0814 17:36:29.240202   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 4m37.443414836s
	W0814 17:36:29.240223   79367 start.go:714] error starting host: provision: host is not running
	W0814 17:36:29.240348   79367 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 17:36:29.240358   79367 start.go:729] Will try again in 5 seconds ...
	I0814 17:36:30.482377   79521 main.go:141] libmachine: (embed-certs-309673) Waiting to get IP...
	I0814 17:36:30.483405   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.483750   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.483837   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.483729   80776 retry.go:31] will retry after 224.900105ms: waiting for machine to come up
	I0814 17:36:30.710259   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.710718   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.710748   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.710679   80776 retry.go:31] will retry after 322.892012ms: waiting for machine to come up
	I0814 17:36:31.035358   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.035807   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.035835   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.035757   80776 retry.go:31] will retry after 374.226901ms: waiting for machine to come up
	I0814 17:36:31.411228   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.411783   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.411813   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.411717   80776 retry.go:31] will retry after 472.149905ms: waiting for machine to come up
	I0814 17:36:31.885265   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.885787   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.885810   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.885757   80776 retry.go:31] will retry after 676.063343ms: waiting for machine to come up
	I0814 17:36:32.563206   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:32.563711   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:32.563745   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:32.563658   80776 retry.go:31] will retry after 904.634039ms: waiting for machine to come up
	I0814 17:36:33.469832   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:33.470255   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:33.470278   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:33.470206   80776 retry.go:31] will retry after 1.132974911s: waiting for machine to come up
	I0814 17:36:34.605040   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:34.605542   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:34.605576   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:34.605498   80776 retry.go:31] will retry after 1.210457498s: waiting for machine to come up
	I0814 17:36:34.242590   79367 start.go:360] acquireMachinesLock for no-preload-545149: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:36:35.817809   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:35.818152   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:35.818177   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:35.818111   80776 retry.go:31] will retry after 1.275236618s: waiting for machine to come up
	I0814 17:36:37.095551   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:37.095975   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:37.096001   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:37.095937   80776 retry.go:31] will retry after 1.716925001s: waiting for machine to come up
	I0814 17:36:38.814927   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:38.815916   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:38.815943   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:38.815864   80776 retry.go:31] will retry after 2.040428036s: waiting for machine to come up
	I0814 17:36:40.858640   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:40.859157   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:40.859188   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:40.859108   80776 retry.go:31] will retry after 2.259949864s: waiting for machine to come up
	I0814 17:36:43.120436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:43.120913   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:43.120939   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:43.120879   80776 retry.go:31] will retry after 3.64334808s: waiting for machine to come up
	I0814 17:36:47.975977   79871 start.go:364] duration metric: took 3m52.18367446s to acquireMachinesLock for "default-k8s-diff-port-885666"
	I0814 17:36:47.976049   79871 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:47.976064   79871 fix.go:54] fixHost starting: 
	I0814 17:36:47.976457   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:47.976492   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:47.993513   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0814 17:36:47.993940   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:47.994480   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:36:47.994504   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:47.994815   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:47.995005   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:36:47.995181   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:36:47.996716   79871 fix.go:112] recreateIfNeeded on default-k8s-diff-port-885666: state=Stopped err=<nil>
	I0814 17:36:47.996755   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	W0814 17:36:47.996923   79871 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:47.998967   79871 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-885666" ...
	I0814 17:36:46.766908   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767458   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has current primary IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767500   79521 main.go:141] libmachine: (embed-certs-309673) Found IP for machine: 192.168.61.2
	I0814 17:36:46.767516   79521 main.go:141] libmachine: (embed-certs-309673) Reserving static IP address...
	I0814 17:36:46.767974   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.767993   79521 main.go:141] libmachine: (embed-certs-309673) Reserved static IP address: 192.168.61.2
	I0814 17:36:46.768006   79521 main.go:141] libmachine: (embed-certs-309673) DBG | skip adding static IP to network mk-embed-certs-309673 - found existing host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"}
	I0814 17:36:46.768017   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Getting to WaitForSSH function...
	I0814 17:36:46.768023   79521 main.go:141] libmachine: (embed-certs-309673) Waiting for SSH to be available...
	I0814 17:36:46.770187   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770517   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.770548   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770612   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH client type: external
	I0814 17:36:46.770643   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa (-rw-------)
	I0814 17:36:46.770672   79521 main.go:141] libmachine: (embed-certs-309673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:36:46.770697   79521 main.go:141] libmachine: (embed-certs-309673) DBG | About to run SSH command:
	I0814 17:36:46.770703   79521 main.go:141] libmachine: (embed-certs-309673) DBG | exit 0
	I0814 17:36:46.895078   79521 main.go:141] libmachine: (embed-certs-309673) DBG | SSH cmd err, output: <nil>: 
	I0814 17:36:46.895444   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetConfigRaw
	I0814 17:36:46.896033   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:46.898715   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899085   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.899117   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899434   79521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/config.json ...
	I0814 17:36:46.899701   79521 machine.go:94] provisionDockerMachine start ...
	I0814 17:36:46.899723   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:46.899906   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:46.901985   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902244   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.902268   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902398   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:46.902564   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902707   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902829   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:46.902966   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:46.903201   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:46.903213   79521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:36:47.007289   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:36:47.007313   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007589   79521 buildroot.go:166] provisioning hostname "embed-certs-309673"
	I0814 17:36:47.007608   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007802   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.010311   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010631   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.010670   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010805   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.010956   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011067   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.011269   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.011455   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.011467   79521 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-309673 && echo "embed-certs-309673" | sudo tee /etc/hostname
	I0814 17:36:47.128575   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-309673
	
	I0814 17:36:47.128601   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.131125   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131464   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.131493   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131655   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.131970   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132146   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132286   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.132457   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.132614   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.132630   79521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-309673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-309673/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-309673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:36:47.247426   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:47.247469   79521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:36:47.247486   79521 buildroot.go:174] setting up certificates
	I0814 17:36:47.247496   79521 provision.go:84] configureAuth start
	I0814 17:36:47.247506   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.247768   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.250616   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.250993   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.251018   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.251148   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.253149   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.253465   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253551   79521 provision.go:143] copyHostCerts
	I0814 17:36:47.253616   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:36:47.253628   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:36:47.253703   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:36:47.253817   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:36:47.253835   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:36:47.253875   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:36:47.253952   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:36:47.253962   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:36:47.253994   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:36:47.254060   79521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.embed-certs-309673 san=[127.0.0.1 192.168.61.2 embed-certs-309673 localhost minikube]
	I0814 17:36:47.338831   79521 provision.go:177] copyRemoteCerts
	I0814 17:36:47.338892   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:36:47.338921   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.341582   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.341897   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.341915   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.342053   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.342237   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.342374   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.342497   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.424777   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:36:47.446682   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:36:47.467672   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:36:47.488423   79521 provision.go:87] duration metric: took 240.914172ms to configureAuth
	I0814 17:36:47.488453   79521 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:36:47.488645   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:36:47.488733   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.491453   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.491793   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.491816   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.492028   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.492216   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492351   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492479   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.492716   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.492909   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.492931   79521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:36:47.746210   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:36:47.746248   79521 machine.go:97] duration metric: took 846.530779ms to provisionDockerMachine
	I0814 17:36:47.746260   79521 start.go:293] postStartSetup for "embed-certs-309673" (driver="kvm2")
	I0814 17:36:47.746274   79521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:36:47.746297   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.746659   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:36:47.746694   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.749342   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749674   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.749702   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749831   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.750004   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.750126   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.750272   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.833279   79521 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:36:47.837076   79521 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:36:47.837099   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:36:47.837183   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:36:47.837269   79521 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:36:47.837387   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:36:47.845640   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:47.866978   79521 start.go:296] duration metric: took 120.70557ms for postStartSetup
	I0814 17:36:47.867012   79521 fix.go:56] duration metric: took 18.626661733s for fixHost
	I0814 17:36:47.867030   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.869687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870016   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.870046   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870220   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.870399   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870660   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870827   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.870999   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.871209   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.871221   79521 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:36:47.975817   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657007.950271601
	
	I0814 17:36:47.975848   79521 fix.go:216] guest clock: 1723657007.950271601
	I0814 17:36:47.975860   79521 fix.go:229] Guest: 2024-08-14 17:36:47.950271601 +0000 UTC Remote: 2024-08-14 17:36:47.867016056 +0000 UTC m=+282.648397849 (delta=83.255545ms)
	I0814 17:36:47.975889   79521 fix.go:200] guest clock delta is within tolerance: 83.255545ms
	I0814 17:36:47.975896   79521 start.go:83] releasing machines lock for "embed-certs-309673", held for 18.735575335s
	I0814 17:36:47.975931   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.976213   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.978934   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979457   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.979483   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979625   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980134   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980303   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980382   79521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:36:47.980428   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.980574   79521 ssh_runner.go:195] Run: cat /version.json
	I0814 17:36:47.980603   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.983247   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983557   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983649   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.983687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983828   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984032   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984042   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.984063   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.984183   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984232   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984320   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984412   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.984467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984608   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:48.064891   79521 ssh_runner.go:195] Run: systemctl --version
	I0814 17:36:48.101403   79521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:36:48.239841   79521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:36:48.245634   79521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:36:48.245718   79521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:36:48.260517   79521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:36:48.260543   79521 start.go:495] detecting cgroup driver to use...
	I0814 17:36:48.260597   79521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:36:48.275003   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:36:48.290316   79521 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:36:48.290376   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:36:48.304351   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:36:48.320954   79521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:36:48.434176   79521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:36:48.582137   79521 docker.go:233] disabling docker service ...
	I0814 17:36:48.582217   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:36:48.595784   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:36:48.608379   79521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:36:48.735500   79521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:36:48.876194   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:36:48.891826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:36:48.910820   79521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:36:48.910887   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.921125   79521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:36:48.921198   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.931615   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.942779   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.953124   79521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:36:48.963454   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.974457   79521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.991583   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:49.006059   79521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:36:49.015586   79521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:36:49.015649   79521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:36:49.028742   79521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:36:49.038126   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:49.155387   79521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:36:49.318598   79521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:36:49.318679   79521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:36:49.323575   79521 start.go:563] Will wait 60s for crictl version
	I0814 17:36:49.323636   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:36:49.327233   79521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:36:49.369724   79521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:36:49.369814   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.399516   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.431594   79521 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:36:49.432940   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:49.435776   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436168   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:49.436199   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436447   79521 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 17:36:49.440606   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:49.453159   79521 kubeadm.go:883] updating cluster {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:36:49.453272   79521 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:36:49.453311   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:49.486635   79521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:36:49.486708   79521 ssh_runner.go:195] Run: which lz4
	I0814 17:36:49.490626   79521 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:36:49.494822   79521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:36:49.494852   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:36:48.000271   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Start
	I0814 17:36:48.000453   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring networks are active...
	I0814 17:36:48.001246   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network default is active
	I0814 17:36:48.001621   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network mk-default-k8s-diff-port-885666 is active
	I0814 17:36:48.002158   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Getting domain xml...
	I0814 17:36:48.002982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Creating domain...
	I0814 17:36:49.272729   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting to get IP...
	I0814 17:36:49.273726   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274182   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274273   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.274157   80921 retry.go:31] will retry after 208.258845ms: waiting for machine to come up
	I0814 17:36:49.483781   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484278   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.484211   80921 retry.go:31] will retry after 318.193974ms: waiting for machine to come up
	I0814 17:36:49.803815   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804311   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.804277   80921 retry.go:31] will retry after 426.023242ms: waiting for machine to come up
	I0814 17:36:50.232060   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232610   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232646   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.232519   80921 retry.go:31] will retry after 534.392065ms: waiting for machine to come up
	I0814 17:36:50.745416   79521 crio.go:462] duration metric: took 1.254815826s to copy over tarball
	I0814 17:36:50.745515   79521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:36:52.865848   79521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.120299454s)
	I0814 17:36:52.865879   79521 crio.go:469] duration metric: took 2.120437156s to extract the tarball
	I0814 17:36:52.865887   79521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:36:52.901808   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:52.946366   79521 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:36:52.946386   79521 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:36:52.946394   79521 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.31.0 crio true true} ...
	I0814 17:36:52.946492   79521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-309673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:36:52.946556   79521 ssh_runner.go:195] Run: crio config
	I0814 17:36:52.992520   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:36:52.992541   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:36:52.992553   79521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:36:52.992577   79521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-309673 NodeName:embed-certs-309673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:36:52.992740   79521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-309673"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:36:52.992811   79521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:36:53.002460   79521 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:36:53.002539   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:36:53.011167   79521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 17:36:53.026436   79521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:36:53.041728   79521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
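The YAML dumped above is the kubeadm/kubelet/kube-proxy configuration that minikube writes to /var/tmp/minikube/kubeadm.yaml.new, built from the option struct logged at kubeadm.go:181. A minimal sketch of rendering such a snippet from a struct with Go's text/template follows; the struct fields and template text are illustrative assumptions, not minikube's actual types or template.

// Sketch only: render a kubeadm ClusterConfiguration-style snippet from a
// small options struct. Field names and template are assumptions for
// illustration, not minikube's real implementation.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	ClusterName       string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		ClusterName:       "mk",
		APIServerPort:     8443,
		KubernetesVersion: "v1.31.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
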
	I0814 17:36:53.059102   79521 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0814 17:36:53.062728   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:53.073803   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:53.200870   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:36:53.217448   79521 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673 for IP: 192.168.61.2
	I0814 17:36:53.217472   79521 certs.go:194] generating shared ca certs ...
	I0814 17:36:53.217495   79521 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:36:53.217694   79521 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:36:53.217755   79521 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:36:53.217766   79521 certs.go:256] generating profile certs ...
	I0814 17:36:53.217876   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/client.key
	I0814 17:36:53.217961   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key.83510bb8
	I0814 17:36:53.218034   79521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key
	I0814 17:36:53.218202   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:36:53.218248   79521 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:36:53.218272   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:36:53.218309   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:36:53.218343   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:36:53.218380   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:36:53.218447   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:53.219187   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:36:53.273437   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:36:53.307566   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:36:53.330107   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:36:53.360324   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 17:36:53.386974   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:36:53.409537   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:36:53.433873   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:36:53.456408   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:36:53.478233   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:36:53.500264   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:36:53.522440   79521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:36:53.538977   79521 ssh_runner.go:195] Run: openssl version
	I0814 17:36:53.544866   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:36:53.555085   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559340   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559399   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.565106   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:36:53.575561   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:36:53.585605   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589838   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589911   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.595165   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:36:53.604934   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:36:53.615153   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619362   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619435   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.624949   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:36:53.635459   79521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:36:53.639814   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:36:53.645419   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:36:53.651013   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:36:53.657004   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:36:53.662540   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:36:53.668187   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
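The series of openssl x509 -noout -in ... -checkend 86400 runs above asks whether each existing control-plane certificate expires within the next 24 hours before reusing it. A minimal Go sketch of the same check, assuming a PEM-encoded certificate file; the path is taken from the log purely for illustration (reading it on the node normally requires root).

// Sketch: report whether a PEM certificate expires within the given window,
// mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
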
	I0814 17:36:53.673762   79521 kubeadm.go:392] StartCluster: {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:36:53.673867   79521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:36:53.673930   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.709404   79521 cri.go:89] found id: ""
	I0814 17:36:53.709490   79521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:36:53.719041   79521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:36:53.719068   79521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:36:53.719123   79521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:36:53.728077   79521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:36:53.729030   79521 kubeconfig.go:125] found "embed-certs-309673" server: "https://192.168.61.2:8443"
	I0814 17:36:53.730943   79521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:36:53.739841   79521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0814 17:36:53.739872   79521 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:36:53.739886   79521 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:36:53.739947   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.777400   79521 cri.go:89] found id: ""
	I0814 17:36:53.777476   79521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:36:53.792838   79521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:36:53.802189   79521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:36:53.802223   79521 kubeadm.go:157] found existing configuration files:
	
	I0814 17:36:53.802278   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:36:53.813778   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:36:53.813854   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:36:53.825962   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:36:53.834929   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:36:53.834987   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:36:53.846315   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.855138   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:36:53.855206   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.864109   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:36:53.872613   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:36:53.872672   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:36:53.881307   79521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:36:53.890148   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.002103   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.664940   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.868608   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.932317   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:55.006430   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:36:55.006523   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:50.768099   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768599   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768629   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.768554   80921 retry.go:31] will retry after 487.741283ms: waiting for machine to come up
	I0814 17:36:51.258499   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259047   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:51.258975   80921 retry.go:31] will retry after 831.435484ms: waiting for machine to come up
	I0814 17:36:52.091900   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092297   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092351   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:52.092249   80921 retry.go:31] will retry after 1.067858402s: waiting for machine to come up
	I0814 17:36:53.161928   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162393   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:53.162366   80921 retry.go:31] will retry after 1.33971606s: waiting for machine to come up
	I0814 17:36:54.503810   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504184   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504214   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:54.504121   80921 retry.go:31] will retry after 1.4882184s: waiting for machine to come up
	I0814 17:36:55.506634   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.007367   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.507265   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.007343   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.026436   79521 api_server.go:72] duration metric: took 2.020005984s to wait for apiserver process to appear ...
	I0814 17:36:57.026471   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:36:57.026496   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:36:55.994824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995255   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995283   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:55.995206   80921 retry.go:31] will retry after 1.65461779s: waiting for machine to come up
	I0814 17:36:57.651449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651837   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651867   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:57.651794   80921 retry.go:31] will retry after 2.38071296s: waiting for machine to come up
	I0814 17:37:00.033719   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034261   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034290   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:00.034204   80921 retry.go:31] will retry after 3.476533232s: waiting for machine to come up
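	(The interleaved `79871` lines above come from the kvm2 driver polling libvirt for the new VM's DHCP lease, backing off between attempts until the guest reports an IP. A minimal sketch of that wait loop, assuming the network name from the log and shelling out to `virsh` purely for illustration — the real driver talks to libvirt directly:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls `virsh net-dhcp-leases` until a lease for the given MAC
// appears, roughly mirroring the retry.go backoff seen in the log above.
func waitForLease(network, mac string) (string, error) {
	backoff := 500 * time.Millisecond
	for i := 0; i < 20; i++ {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if !strings.Contains(line, mac) {
					continue
				}
				// The IP column in virsh output carries a /prefix suffix.
				for _, f := range strings.Fields(line) {
					if strings.Contains(f, "/") {
						return strings.SplitN(f, "/", 2)[0], nil
					}
				}
			}
		}
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait between attempts, as retry.go does
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
}

func main() {
	ip, err := waitForLease("mk-default-k8s-diff-port-885666", "52:54:00:f8:cc:3c")
	if err != nil {
		fmt.Println("still waiting:", err)
		return
	}
	fmt.Println("machine came up at", ip)
}
```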
	I0814 17:37:00.329636   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.329674   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.329689   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.357287   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.357334   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.527150   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.536020   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:00.536058   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.026558   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.034241   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.034271   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.526814   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.536226   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.536267   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:02.026791   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:02.031068   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:37:02.037240   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:02.037266   79521 api_server.go:131] duration metric: took 5.010786446s to wait for apiserver health ...
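	(The healthz probes above first return 403 for anonymous access, then 500 while the rbac and scheduling post-start hooks finish, and finally 200. A rough sketch of that kind of poll against the endpoint shown in the log, skipping TLS verification purely for illustration — the real code authenticates with client certificates:)

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 "ok",
// printing the intermediate 403/500 bodies much like api_server.go does above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: verification is skipped here instead of loading certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```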
	I0814 17:37:02.037278   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:37:02.037286   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:02.039248   79521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:02.040543   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:02.050754   79521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:02.067333   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:02.076082   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:02.076115   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:02.076130   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:02.076139   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:02.076178   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:02.076198   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:02.076218   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:02.076233   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:02.076242   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:02.076253   79521 system_pods.go:74] duration metric: took 8.901356ms to wait for pod list to return data ...
	I0814 17:37:02.076266   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:02.080064   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:02.080087   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:02.080101   79521 node_conditions.go:105] duration metric: took 3.829329ms to run NodePressure ...
	I0814 17:37:02.080121   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:02.359163   79521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368689   79521 kubeadm.go:739] kubelet initialised
	I0814 17:37:02.368717   79521 kubeadm.go:740] duration metric: took 9.524301ms waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368728   79521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:02.376056   79521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.381317   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381347   79521 pod_ready.go:81] duration metric: took 5.262062ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.381359   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381370   79521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.386799   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386822   79521 pod_ready.go:81] duration metric: took 5.440585ms for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.386832   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386838   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.392829   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392853   79521 pod_ready.go:81] duration metric: took 6.003762ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.392864   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392874   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.470943   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470975   79521 pod_ready.go:81] duration metric: took 78.089715ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.470984   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470996   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.870134   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870163   79521 pod_ready.go:81] duration metric: took 399.157385ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.870175   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870183   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.270805   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270837   79521 pod_ready.go:81] duration metric: took 400.647029ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.270848   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270856   79521 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.671023   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671058   79521 pod_ready.go:81] duration metric: took 400.191147ms for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.671070   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671079   79521 pod_ready.go:38] duration metric: took 1.302340033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
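	(Each `pod_ready.go` wait above is skipped because the node still reports `Ready=False`; a pod only counts as ready once its own Ready condition turns true as well. A hedged client-go sketch of the per-pod condition check — the package paths are standard client-go, the helper name and pod name are just taken from the log for illustration:)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named kube-system pod has the Ready condition,
// mirroring the per-pod waits logged by pod_ready.go above.
func isPodReady(cs *kubernetes.Clientset, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(cs, "etcd-embed-certs-309673")
	fmt.Println("ready:", ready, "err:", err)
}
```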
	I0814 17:37:03.671098   79521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:37:03.683676   79521 ops.go:34] apiserver oom_adj: -16
	I0814 17:37:03.683701   79521 kubeadm.go:597] duration metric: took 9.964625256s to restartPrimaryControlPlane
	I0814 17:37:03.683712   79521 kubeadm.go:394] duration metric: took 10.009956133s to StartCluster
	I0814 17:37:03.683729   79521 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.683809   79521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:03.685474   79521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.685708   79521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:37:03.685766   79521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:37:03.685850   79521 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-309673"
	I0814 17:37:03.685862   79521 addons.go:69] Setting default-storageclass=true in profile "embed-certs-309673"
	I0814 17:37:03.685900   79521 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-309673"
	I0814 17:37:03.685907   79521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-309673"
	W0814 17:37:03.685911   79521 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:37:03.685933   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:03.685933   79521 addons.go:69] Setting metrics-server=true in profile "embed-certs-309673"
	I0814 17:37:03.685988   79521 addons.go:234] Setting addon metrics-server=true in "embed-certs-309673"
	W0814 17:37:03.686006   79521 addons.go:243] addon metrics-server should already be in state true
	I0814 17:37:03.685945   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686076   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686284   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686362   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686391   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686422   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686482   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686538   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.687598   79521 out.go:177] * Verifying Kubernetes components...
	I0814 17:37:03.688995   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:03.701610   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0814 17:37:03.702174   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.702789   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.702817   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.703223   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.703682   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.704077   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0814 17:37:03.704508   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.704864   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0814 17:37:03.705141   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705154   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.705224   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.705473   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.705656   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705670   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.706806   79521 addons.go:234] Setting addon default-storageclass=true in "embed-certs-309673"
	W0814 17:37:03.706824   79521 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:37:03.706851   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.707093   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707112   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.707420   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.707536   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707584   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.708017   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.708079   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.722383   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0814 17:37:03.722779   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.723288   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.723307   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.728799   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0814 17:37:03.728839   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38781
	I0814 17:37:03.728928   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.729426   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729495   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729776   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.729809   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729967   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.729973   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.730360   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730371   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730698   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.730749   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.732979   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.733596   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.735250   79521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:03.735262   79521 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:37:03.736576   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:37:03.736593   79521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:37:03.736607   79521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.736612   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.736620   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:37:03.736637   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.740008   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740123   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740491   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740558   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740676   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.740819   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740842   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740872   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.740994   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741120   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.741160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.741527   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.741692   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741817   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.749144   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0814 17:37:03.749482   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.749914   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.749929   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.750267   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.750467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.752107   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.752325   79521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:03.752339   79521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:37:03.752360   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.754559   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.754845   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.754859   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.755073   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.755247   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.755402   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.755529   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.877535   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:03.897022   79521 node_ready.go:35] waiting up to 6m0s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:03.951512   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.988066   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:37:03.988085   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:37:04.014925   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:04.025506   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:37:04.025531   79521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:37:04.072457   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:04.072480   79521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:37:04.104804   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:05.067867   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.116315804s)
	I0814 17:37:05.067888   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052939793s)
	I0814 17:37:05.067925   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.067935   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068000   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068023   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068241   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068322   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068336   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068345   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068364   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068454   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068485   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068497   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068505   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068795   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068815   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068823   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068830   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068872   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068905   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.087716   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.087746   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.088086   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.088106   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113388   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.008529856s)
	I0814 17:37:05.113441   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113458   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.113736   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.113787   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.113800   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113812   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113823   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.114057   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.114071   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.114081   79521 addons.go:475] Verifying addon metrics-server=true in "embed-certs-309673"
	I0814 17:37:05.114163   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.116443   79521 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:37:05.118087   79521 addons.go:510] duration metric: took 1.432323959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
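	(The addon flow above copies each manifest onto the node over scp and then applies them all in one `kubectl apply -f ...` call using the node-local binary and kubeconfig. A simplified sketch of that apply step, with paths taken from the log and error handling trimmed:)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons applies the copied addon manifests with the node-local kubectl,
// roughly what the `sudo KUBECONFIG=... kubectl apply -f ...` lines above do.
func applyAddons(manifests ...string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	err := applyAddons(
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```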
	I0814 17:37:03.512364   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512842   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:03.512785   80921 retry.go:31] will retry after 4.358649621s: waiting for machine to come up
	I0814 17:37:09.324026   80228 start.go:364] duration metric: took 3m22.895078586s to acquireMachinesLock for "old-k8s-version-505584"
	I0814 17:37:09.324085   80228 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:09.324101   80228 fix.go:54] fixHost starting: 
	I0814 17:37:09.324533   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:09.324575   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:09.344085   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0814 17:37:09.344490   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:09.344980   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:37:09.345006   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:09.345416   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:09.345674   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:09.345842   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetState
	I0814 17:37:09.347489   80228 fix.go:112] recreateIfNeeded on old-k8s-version-505584: state=Stopped err=<nil>
	I0814 17:37:09.347511   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	W0814 17:37:09.347696   80228 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:09.349747   80228 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-505584" ...
	I0814 17:37:05.901013   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.901054   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.876377   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876820   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has current primary IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876845   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Found IP for machine: 192.168.50.184
	I0814 17:37:07.876857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserving static IP address...
	I0814 17:37:07.877281   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.877300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserved static IP address: 192.168.50.184
	I0814 17:37:07.877320   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | skip adding static IP to network mk-default-k8s-diff-port-885666 - found existing host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"}
	I0814 17:37:07.877339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Getting to WaitForSSH function...
	I0814 17:37:07.877355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for SSH to be available...
	I0814 17:37:07.879843   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880200   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.880242   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880419   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH client type: external
	I0814 17:37:07.880445   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa (-rw-------)
	I0814 17:37:07.880496   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:07.880517   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | About to run SSH command:
	I0814 17:37:07.880549   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | exit 0
	I0814 17:37:08.007553   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | SSH cmd err, output: <nil>: 
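	(Once the lease is found, provisioning proceeds over SSH: a plain `exit 0` probe in the WaitForSSH exchange above, then `hostname` and the `sudo tee /etc/hostname` command further down. A hedged sketch of running one such command with golang.org/x/crypto/ssh, reusing the docker user and key path shown in the log; the function name is invented for illustration:)

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH runs a single command on the freshly booted machine, the way the
// provisioning steps above run `exit 0`, `hostname`, and the tee command.
func runSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Illustration only: the log's ssh invocation also disables host key checking.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.50.184:22", "docker",
		"/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
```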
	I0814 17:37:08.007929   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetConfigRaw
	I0814 17:37:08.009171   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.012358   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.012772   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.012804   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.013076   79871 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/config.json ...
	I0814 17:37:08.013284   79871 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:08.013310   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:08.013579   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.015965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016325   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.016363   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016491   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.016680   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.016873   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.017004   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.017140   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.017354   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.017376   79871 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:08.132369   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:08.132404   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132657   79871 buildroot.go:166] provisioning hostname "default-k8s-diff-port-885666"
	I0814 17:37:08.132695   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.136230   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.136696   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136937   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.137163   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137350   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.137672   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.137878   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.137900   79871 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-885666 && echo "default-k8s-diff-port-885666" | sudo tee /etc/hostname
	I0814 17:37:08.273593   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-885666
	
	I0814 17:37:08.273626   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.276470   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.276830   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.276862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.277137   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.277382   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277547   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277713   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.277855   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.278052   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.278072   79871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-885666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-885666/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-885666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:08.401522   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
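The hostname provisioning above is driven entirely over SSH against the guest at 192.168.50.184:22 with the per-machine id_rsa key (the ssh options are logged earlier in this block). For context only, here is a minimal stand-alone Go sketch of running one such provisioning command and capturing its output, assuming golang.org/x/crypto/ssh and a hypothetical key path; it is an illustration of the idea, not minikube's actual libmachine/ssh_runner implementation.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; the log above uses the per-machine id_rsa under .minikube/machines/.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the logged ssh options
	}
	client, err := ssh.Dial("tcp", "192.168.50.184:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Equivalent in spirit to the "About to run SSH command" steps above.
	out, err := session.CombinedOutput("sudo hostname default-k8s-diff-port-885666 && hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("SSH cmd output: %s\n", out)
}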
	I0814 17:37:08.401556   79871 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:08.401602   79871 buildroot.go:174] setting up certificates
	I0814 17:37:08.401626   79871 provision.go:84] configureAuth start
	I0814 17:37:08.401650   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.401963   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.404855   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.405285   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405521   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.407826   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408338   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.408371   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408515   79871 provision.go:143] copyHostCerts
	I0814 17:37:08.408583   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:08.408597   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:08.408681   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:08.408812   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:08.408823   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:08.408861   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:08.408947   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:08.408956   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:08.408984   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:08.409064   79871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-885666 san=[127.0.0.1 192.168.50.184 default-k8s-diff-port-885666 localhost minikube]
	I0814 17:37:08.613459   79871 provision.go:177] copyRemoteCerts
	I0814 17:37:08.613530   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:08.613575   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.616704   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617044   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.617072   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.617515   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.617698   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.617844   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:08.705505   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:08.728835   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 17:37:08.751995   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:08.774577   79871 provision.go:87] duration metric: took 372.933752ms to configureAuth
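configureAuth above regenerates a server certificate whose SANs cover 127.0.0.1, 192.168.50.184, the machine name, localhost and minikube, signs it with the shared minikubeCA, and copies ca.pem/server.pem/server-key.pem into /etc/docker on the guest. Below is a minimal, self-contained Go sketch of issuing such a SAN-bearing server certificate with the standard library; it illustrates the idea only and is not minikube's provision.go code, and the key sizes and validity periods are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Self-signed CA standing in for minikubeCA (assumed: 2048-bit RSA, 3-year validity).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate carrying the same SAN set as the log line
	// "generating server cert ... san=[127.0.0.1 192.168.50.184 default-k8s-diff-port-885666 localhost minikube]".
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-885666"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.184")},
		DNSNames:     []string{"default-k8s-diff-port-885666", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	// Emit the server certificate in PEM form (what ends up as server.pem).
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}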
	I0814 17:37:08.774609   79871 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:08.774812   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:08.774880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.777840   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778235   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.778260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778527   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.778752   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.778899   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.779020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.779162   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.779437   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.779458   79871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:09.055900   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:09.055927   79871 machine.go:97] duration metric: took 1.04262996s to provisionDockerMachine
	I0814 17:37:09.055943   79871 start.go:293] postStartSetup for "default-k8s-diff-port-885666" (driver="kvm2")
	I0814 17:37:09.055957   79871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:09.055982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.056325   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:09.056355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.059396   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.059853   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.059888   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.060064   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.060280   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.060558   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.060745   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.150649   79871 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:09.155263   79871 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:09.155295   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:09.155400   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:09.155500   79871 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:09.155623   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:09.167051   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:09.197223   79871 start.go:296] duration metric: took 141.264897ms for postStartSetup
	I0814 17:37:09.197324   79871 fix.go:56] duration metric: took 21.221265818s for fixHost
	I0814 17:37:09.197356   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.201388   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.201965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.202011   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.202109   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.202354   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202569   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202800   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.203010   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:09.203196   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:09.203209   79871 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:09.323868   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657029.302975780
	
	I0814 17:37:09.323892   79871 fix.go:216] guest clock: 1723657029.302975780
	I0814 17:37:09.323900   79871 fix.go:229] Guest: 2024-08-14 17:37:09.30297578 +0000 UTC Remote: 2024-08-14 17:37:09.197335302 +0000 UTC m=+253.546385360 (delta=105.640478ms)
	I0814 17:37:09.323918   79871 fix.go:200] guest clock delta is within tolerance: 105.640478ms
	I0814 17:37:09.323923   79871 start.go:83] releasing machines lock for "default-k8s-diff-port-885666", held for 21.347903434s
	I0814 17:37:09.323948   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.324209   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:09.327260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327802   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.327833   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327993   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328727   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328814   79871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:09.328862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.328955   79871 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:09.328972   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.331813   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332081   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332233   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332274   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332365   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332490   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332512   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332555   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332761   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.332824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332882   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.332926   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.333021   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.416041   79871 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:09.456024   79871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:09.604623   79871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:09.610562   79871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:09.610624   79871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:09.627298   79871 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:09.627344   79871 start.go:495] detecting cgroup driver to use...
	I0814 17:37:09.627418   79871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:09.648212   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:09.666047   79871 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:09.666107   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:09.681875   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:09.695920   79871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:09.824502   79871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:09.979561   79871 docker.go:233] disabling docker service ...
	I0814 17:37:09.979658   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:09.996877   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:10.014264   79871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:10.166653   79871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:10.288261   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:10.301868   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:10.320716   79871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:10.320788   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.331099   79871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:10.331158   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.342841   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.353762   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.364604   79871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:10.376521   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.386787   79871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.406713   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
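The sed invocations above only rewrite or insert individual keys in /etc/crio/crio.conf.d/02-crio.conf, so their net effect can be read off the commands themselves: the pause image, the cgroup manager, the conmon cgroup, and an unprivileged-port sysctl. Roughly (assumed layout, since the file itself is never printed in the log), the touched keys end up as:

cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
pause_image = "registry.k8s.io/pause:3.10"

together with the separately written /etc/crictl.yaml pointing crictl at unix:///var/run/crio/crio.sock.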
	I0814 17:37:10.418047   79871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:10.428368   79871 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:10.428433   79871 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:10.442759   79871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:10.452993   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:10.563097   79871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:10.716953   79871 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:10.717031   79871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:10.722685   79871 start.go:563] Will wait 60s for crictl version
	I0814 17:37:10.722759   79871 ssh_runner.go:195] Run: which crictl
	I0814 17:37:10.726621   79871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:10.764534   79871 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:10.764628   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.791513   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.822380   79871 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:09.351136   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .Start
	I0814 17:37:09.351338   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring networks are active...
	I0814 17:37:09.352075   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network default is active
	I0814 17:37:09.352333   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network mk-old-k8s-version-505584 is active
	I0814 17:37:09.352701   80228 main.go:141] libmachine: (old-k8s-version-505584) Getting domain xml...
	I0814 17:37:09.353363   80228 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
	I0814 17:37:10.664390   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting to get IP...
	I0814 17:37:10.665484   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.665915   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.665980   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.665888   81116 retry.go:31] will retry after 285.047327ms: waiting for machine to come up
	I0814 17:37:10.952552   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.953009   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.953036   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.952973   81116 retry.go:31] will retry after 281.728141ms: waiting for machine to come up
	I0814 17:37:11.236576   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.237153   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.237192   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.237079   81116 retry.go:31] will retry after 341.673836ms: waiting for machine to come up
	I0814 17:37:10.401790   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:11.400713   79521 node_ready.go:49] node "embed-certs-309673" has status "Ready":"True"
	I0814 17:37:11.400742   79521 node_ready.go:38] duration metric: took 7.503686271s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:11.400755   79521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:11.408217   79521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414215   79521 pod_ready.go:92] pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:11.414244   79521 pod_ready.go:81] duration metric: took 5.997997ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414256   79521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:13.420804   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:10.824020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:10.827965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828426   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:10.828465   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828807   79871 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:10.833261   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:10.846928   79871 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:10.847080   79871 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:10.847142   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:10.889355   79871 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:10.889453   79871 ssh_runner.go:195] Run: which lz4
	I0814 17:37:10.894405   79871 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:37:10.898992   79871 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:10.899029   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:37:12.155402   79871 crio.go:462] duration metric: took 1.261016682s to copy over tarball
	I0814 17:37:12.155485   79871 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:14.344118   79871 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18859644s)
	I0814 17:37:14.344162   79871 crio.go:469] duration metric: took 2.188726026s to extract the tarball
	I0814 17:37:14.344173   79871 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:14.380317   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:14.428289   79871 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:37:14.428312   79871 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:37:14.428326   79871 kubeadm.go:934] updating node { 192.168.50.184 8444 v1.31.0 crio true true} ...
	I0814 17:37:14.428422   79871 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-885666 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:14.428491   79871 ssh_runner.go:195] Run: crio config
	I0814 17:37:14.475385   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:14.475416   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:14.475433   79871 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:14.475464   79871 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-885666 NodeName:default-k8s-diff-port-885666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:37:14.475645   79871 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-885666"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:14.475712   79871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:37:14.485148   79871 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:14.485206   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:14.494161   79871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 17:37:14.511050   79871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:14.526395   79871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0814 17:37:14.543061   79871 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:14.546747   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:14.558022   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:14.671818   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:14.688541   79871 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666 for IP: 192.168.50.184
	I0814 17:37:14.688583   79871 certs.go:194] generating shared ca certs ...
	I0814 17:37:14.688609   79871 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:14.688823   79871 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:14.688889   79871 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:14.688903   79871 certs.go:256] generating profile certs ...
	I0814 17:37:14.689020   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/client.key
	I0814 17:37:14.689132   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key.690c84bc
	I0814 17:37:14.689182   79871 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key
	I0814 17:37:14.689310   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:14.689367   79871 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:14.689385   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:14.689422   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:14.689453   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:14.689479   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:14.689524   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:14.690168   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:14.717906   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:14.759373   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:14.809775   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:14.834875   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 17:37:14.857860   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:14.886813   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:14.909803   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:14.935075   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:14.959759   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:14.985877   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:15.008456   79871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:15.025602   79871 ssh_runner.go:195] Run: openssl version
	I0814 17:37:15.031392   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:15.041931   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046475   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046531   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.052377   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:15.063000   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:15.073463   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078411   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078471   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.083835   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:15.093753   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:15.103876   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108487   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108559   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.114104   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:15.124285   79871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:15.128515   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:15.134223   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:15.139700   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:15.145537   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:15.151287   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:15.156766   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
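Each of the openssl invocations above is a 24-hour expiry check: `-checkend 86400` exits non-zero if the certificate will no longer be valid 86400 seconds from now. A small Go sketch of the same check, assuming a hypothetical certificate path, for illustration only:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Same idea as `openssl x509 -noout -in <cert> -checkend 86400`:
	// does this certificate remain valid for at least the next 24 hours?
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h (openssl would exit non-zero)")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}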
	I0814 17:37:15.162149   79871 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:15.162256   79871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:15.162314   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.198745   79871 cri.go:89] found id: ""
	I0814 17:37:15.198814   79871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:15.212198   79871 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:15.212216   79871 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:15.212256   79871 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:15.224275   79871 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:15.225218   79871 kubeconfig.go:125] found "default-k8s-diff-port-885666" server: "https://192.168.50.184:8444"
	I0814 17:37:15.227291   79871 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:15.237448   79871 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.184
	I0814 17:37:15.237494   79871 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:15.237509   79871 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:15.237563   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.281593   79871 cri.go:89] found id: ""
	I0814 17:37:15.281662   79871 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:15.298596   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:15.308702   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:15.308723   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:15.308779   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:37:15.318348   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:15.318409   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:15.330049   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:37:15.341283   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:15.341373   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:15.350584   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.361658   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:15.361718   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.373526   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:37:15.382360   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:15.382432   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:15.392477   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
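[editor's note] The grep/rm sequence above applies the same rule to each kubeconfig under /etc/kubernetes: keep the file only if it already points at https://control-plane.minikube.internal:8444, otherwise delete it so the following kubeadm phases regenerate it (here every file is absent, so each grep exits 2 and the rm is a no-op). A condensed sketch of that loop, assuming direct file access rather than commands run over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8444"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Stale or missing kubeconfig: remove it so kubeadm regenerates it.
			fmt.Printf("%q may not contain %q - removing\n", conf, endpoint)
			os.Remove(conf)
		}
	}
}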
	I0814 17:37:15.402387   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:15.528954   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:11.580887   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.581466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.581500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.581392   81116 retry.go:31] will retry after 514.448726ms: waiting for machine to come up
	I0814 17:37:12.098137   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.098652   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.098740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.098642   81116 retry.go:31] will retry after 649.302617ms: waiting for machine to come up
	I0814 17:37:12.749349   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.749777   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.749803   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.749736   81116 retry.go:31] will retry after 897.486278ms: waiting for machine to come up
	I0814 17:37:13.649145   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:13.649666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:13.649698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:13.649621   81116 retry.go:31] will retry after 1.017213079s: waiting for machine to come up
	I0814 17:37:14.669187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:14.669715   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:14.669740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:14.669679   81116 retry.go:31] will retry after 1.014709613s: waiting for machine to come up
	I0814 17:37:15.685748   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:15.686269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:15.686299   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:15.686217   81116 retry.go:31] will retry after 1.476940798s: waiting for machine to come up
	I0814 17:37:15.422067   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.421689   79521 pod_ready.go:92] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.421715   79521 pod_ready.go:81] duration metric: took 5.007451471s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.421724   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426620   79521 pod_ready.go:92] pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.426644   79521 pod_ready.go:81] duration metric: took 4.912475ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426657   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430754   79521 pod_ready.go:92] pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.430776   79521 pod_ready.go:81] duration metric: took 4.110475ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430787   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434469   79521 pod_ready.go:92] pod "kube-proxy-z8x9t" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.434487   79521 pod_ready.go:81] duration metric: took 3.693253ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434498   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438294   79521 pod_ready.go:92] pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.438314   79521 pod_ready.go:81] duration metric: took 3.80298ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438346   79521 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:18.445838   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.453075   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.676680   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.741803   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
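[editor's note] From 17:37:15.402 onwards the control plane is rebuilt with individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`. Roughly the same sequence expressed as a small Go wrapper around exec.Command, to be run as root on the guest; the binary and config paths are the ones shown in the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	kubeadm := "/var/lib/minikube/binaries/v1.31.0/kubeadm"
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm %v failed: %v", phase, err)
		}
	}
}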
	I0814 17:37:16.831091   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:16.831186   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.332193   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.831346   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.331620   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.832011   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.331528   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.348083   79871 api_server.go:72] duration metric: took 2.516990388s to wait for apiserver process to appear ...
	I0814 17:37:19.348119   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:37:19.348144   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:17.164541   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:17.165093   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:17.165122   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:17.165017   81116 retry.go:31] will retry after 1.644726601s: waiting for machine to come up
	I0814 17:37:18.811628   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:18.812199   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:18.812224   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:18.812132   81116 retry.go:31] will retry after 2.740531885s: waiting for machine to come up
	I0814 17:37:21.576628   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.576657   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.576672   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.601355   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.601389   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.848481   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.855499   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:21.855530   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.349158   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.353345   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:22.353368   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.848954   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.853912   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:37:22.865096   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:22.865127   79871 api_server.go:131] duration metric: took 3.516999004s to wait for apiserver health ...
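[editor's note] The healthz probes above follow the usual restart progression: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are pending, then 200. A minimal poller that reproduces the check against the endpoint from the log (certificate verification skipped, since the apiserver serves a cluster-local cert):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.184:8444/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // "ok" - control plane is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}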
	I0814 17:37:22.865138   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:22.865146   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:22.866812   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:20.446123   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.446518   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:24.945729   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.867939   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:22.881586   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
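[editor's note] Only the size of the generated conflist (496 bytes) appears in the log, not its contents. The snippet below writes a representative bridge conflist of the kind kubelet reads from /etc/cni/net.d; the subnet and plugin options are illustrative assumptions, not values recovered from this run:

package main

import (
	"log"
	"os"
)

// Representative bridge CNI conflist; the actual file written by minikube
// is not reproduced in the log, so treat these values as placeholders.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}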
	I0814 17:37:22.899815   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:22.910873   79871 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:22.910928   79871 system_pods.go:61] "coredns-6f6b679f8f-mxc9v" [d1b9d422-faff-4709-a375-f8783e75e18c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:22.910946   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [a5473465-a1c1-4413-8e77-74fb1eb398a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:22.910956   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [06c53e48-b156-42b1-b381-818f75821196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:22.910966   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [18a2d7fb-4e18-4880-8812-63d25934699b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:22.910977   79871 system_pods.go:61] "kube-proxy-4rrff" [14453cc8-da7d-4dd4-b7fa-89a26dbbf23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:22.910995   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [f0455f16-9a3e-4ede-8524-f701b1ab4ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:22.911005   79871 system_pods.go:61] "metrics-server-6867b74b74-qtzm8" [04c797ec-2e38-42a7-a023-5f60c451f780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:22.911020   79871 system_pods.go:61] "storage-provisioner" [88c2e8f0-0706-494a-8e83-0ede8f129040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:22.911032   79871 system_pods.go:74] duration metric: took 11.192968ms to wait for pod list to return data ...
	I0814 17:37:22.911044   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:22.915096   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:22.915128   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:22.915140   79871 node_conditions.go:105] duration metric: took 4.087198ms to run NodePressure ...
	I0814 17:37:22.915165   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:23.204612   79871 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209643   79871 kubeadm.go:739] kubelet initialised
	I0814 17:37:23.209665   79871 kubeadm.go:740] duration metric: took 5.023123ms waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209673   79871 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:23.215957   79871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.221969   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.221993   79871 pod_ready.go:81] duration metric: took 6.011053ms for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.222008   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.222014   79871 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.227119   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227147   79871 pod_ready.go:81] duration metric: took 5.125006ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.227157   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227163   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.231297   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231321   79871 pod_ready.go:81] duration metric: took 4.149023ms for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.231346   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231355   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:25.239956   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
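[editor's note] The pod_ready entries above poll each system pod for the Ready condition and, as the 17:37:23 lines show, deliberately skip pods whose node still reports Ready=False. Outside of minikube the same wait can be approximated with `kubectl wait`; a small Go wrapper over that command, using the context and pod names from the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	pods := []string{
		"coredns-6f6b679f8f-mxc9v",
		"etcd-default-k8s-diff-port-885666",
		"kube-apiserver-default-k8s-diff-port-885666",
		"kube-controller-manager-default-k8s-diff-port-885666",
	}
	for _, pod := range pods {
		cmd := exec.Command("kubectl",
			"--context", "default-k8s-diff-port-885666",
			"-n", "kube-system",
			"wait", "--for=condition=Ready", "--timeout=4m", "pod/"+pod)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("pod %s did not become Ready: %v", pod, err)
		}
	}
}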
	I0814 17:37:21.555057   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:21.555530   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:21.555562   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:21.555484   81116 retry.go:31] will retry after 3.159225533s: waiting for machine to come up
	I0814 17:37:24.716173   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:24.716482   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:24.716507   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:24.716451   81116 retry.go:31] will retry after 3.32732131s: waiting for machine to come up
	I0814 17:37:29.512066   79367 start.go:364] duration metric: took 55.26941078s to acquireMachinesLock for "no-preload-545149"
	I0814 17:37:29.512115   79367 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:29.512123   79367 fix.go:54] fixHost starting: 
	I0814 17:37:29.512539   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:29.512574   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:29.529625   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0814 17:37:29.530074   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:29.530558   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:37:29.530585   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:29.530930   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:29.531149   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:29.531291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:37:29.532999   79367 fix.go:112] recreateIfNeeded on no-preload-545149: state=Stopped err=<nil>
	I0814 17:37:29.533049   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	W0814 17:37:29.533224   79367 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:29.535000   79367 out.go:177] * Restarting existing kvm2 VM for "no-preload-545149" ...
	I0814 17:37:27.445398   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.945246   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:27.737698   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.737890   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:28.045690   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046151   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has current primary IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046177   80228 main.go:141] libmachine: (old-k8s-version-505584) Found IP for machine: 192.168.72.49
	I0814 17:37:28.046192   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserving static IP address...
	I0814 17:37:28.046500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.046524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | skip adding static IP to network mk-old-k8s-version-505584 - found existing host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"}
	I0814 17:37:28.046540   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserved static IP address: 192.168.72.49
	I0814 17:37:28.046559   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting for SSH to be available...
	I0814 17:37:28.046571   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Getting to WaitForSSH function...
	I0814 17:37:28.048709   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049082   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.049106   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049252   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH client type: external
	I0814 17:37:28.049285   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa (-rw-------)
	I0814 17:37:28.049325   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:28.049342   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | About to run SSH command:
	I0814 17:37:28.049356   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | exit 0
	I0814 17:37:28.179844   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:28.180193   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:37:28.180865   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.183617   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184074   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.184118   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184367   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:37:28.184641   80228 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:28.184663   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:28.184891   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.187158   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187517   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.187547   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187696   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.187857   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188027   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188178   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.188320   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.188570   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.188587   80228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:28.303564   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:28.303597   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.303831   80228 buildroot.go:166] provisioning hostname "old-k8s-version-505584"
	I0814 17:37:28.303856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.304021   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.306826   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307180   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.307210   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307415   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.307608   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307769   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307915   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.308131   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.308336   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.308354   80228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-505584 && echo "old-k8s-version-505584" | sudo tee /etc/hostname
	I0814 17:37:28.434224   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-505584
	
	I0814 17:37:28.434261   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.437350   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437633   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.437666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.438077   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438245   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438395   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.438623   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.438832   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.438857   80228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-505584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-505584/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-505584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:28.564784   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:28.564815   80228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:28.564858   80228 buildroot.go:174] setting up certificates
	I0814 17:37:28.564872   80228 provision.go:84] configureAuth start
	I0814 17:37:28.564884   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.565188   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.568217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.568698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.568731   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.569013   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.571364   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571780   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.571805   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571961   80228 provision.go:143] copyHostCerts
	I0814 17:37:28.572023   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:28.572032   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:28.572076   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:28.572176   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:28.572184   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:28.572206   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:28.572275   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:28.572284   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:28.572337   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:28.572435   80228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-505584 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
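[editor's note] The provision step at 17:37:28.572 regenerates the machine's server certificate, signed by the local CA and carrying the SANs listed in the log (the VM IP plus the usual host names). A compact crypto/x509 sketch of the same idea; it signs with a freshly generated throwaway CA instead of the ca.pem/ca-key.pem pair under the jenkins home directory, so it is an illustration rather than minikube's actual code path:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-505584"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.49")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-505584"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}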
	I0814 17:37:28.804798   80228 provision.go:177] copyRemoteCerts
	I0814 17:37:28.804853   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:28.804879   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.807967   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.808302   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808458   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.808690   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.808874   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.809001   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:28.900346   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:28.926959   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 17:37:28.955373   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:28.984436   80228 provision.go:87] duration metric: took 419.552519ms to configureAuth
	I0814 17:37:28.984463   80228 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:28.984630   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:37:28.984713   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.987602   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988077   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.988107   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988237   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.988486   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988641   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988768   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.988986   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.989209   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.989234   80228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:29.262630   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:29.262656   80228 machine.go:97] duration metric: took 1.078000469s to provisionDockerMachine
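[editor's note] The CRI-O provisioning command a few lines above (the %!s(MISSING) fragments are almost certainly an artifact of logging a command that contains a literal %s, not part of what actually ran) writes a one-line sysconfig file marking the service CIDR 10.96.0.0/12 as an insecure registry and then restarts the runtime. A direct Go equivalent, intended to run as root on the guest:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const opts = "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
		log.Fatal(err)
	}
	// Pick up the new option by restarting the runtime.
	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
		log.Fatalf("restart crio: %v: %s", err, out)
	}
}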
	I0814 17:37:29.262669   80228 start.go:293] postStartSetup for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:37:29.262683   80228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:29.262704   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.263051   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:29.263082   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.266020   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.266495   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266720   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.266919   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.267093   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.267253   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.354027   80228 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:29.358196   80228 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:29.358224   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:29.358304   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:29.358416   80228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:29.358543   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:29.367802   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:29.392802   80228 start.go:296] duration metric: took 130.117007ms for postStartSetup
	I0814 17:37:29.392846   80228 fix.go:56] duration metric: took 20.068754346s for fixHost
	I0814 17:37:29.392871   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.395638   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396032   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.396064   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396251   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.396516   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396698   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396893   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.397155   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:29.397326   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:29.397340   80228 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:29.511889   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657049.468340520
	
	I0814 17:37:29.511913   80228 fix.go:216] guest clock: 1723657049.468340520
	I0814 17:37:29.511923   80228 fix.go:229] Guest: 2024-08-14 17:37:29.46834052 +0000 UTC Remote: 2024-08-14 17:37:29.392851248 +0000 UTC m=+223.104093144 (delta=75.489272ms)
	I0814 17:37:29.511983   80228 fix.go:200] guest clock delta is within tolerance: 75.489272ms
	I0814 17:37:29.511996   80228 start.go:83] releasing machines lock for "old-k8s-version-505584", held for 20.187937886s
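The fix step above amounts to reading the guest's clock (the seconds.nanoseconds output of the remote date command), diffing it against the local clock, and accepting the host when the delta stays within a tolerance. A minimal Go sketch of that comparison follows; the 2-second tolerance and the hard-coded sample value are illustrative only, not minikube's actual code.

// clockdelta.go: hypothetical sketch of the guest-clock tolerance check above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1723657049.468340520" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1723657049.468340520") // sample value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's actual setting
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}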
	I0814 17:37:29.512031   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.512333   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:29.515152   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515487   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.515524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515735   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516299   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516497   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516643   80228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:29.516723   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.516727   80228 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:29.516752   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.519600   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.519751   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520017   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520045   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520164   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520192   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520341   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520423   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520520   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520588   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520646   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520718   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.520780   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.642824   80228 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:29.648834   80228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:29.795482   80228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:29.801407   80228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:29.801486   80228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:29.821662   80228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:29.821684   80228 start.go:495] detecting cgroup driver to use...
	I0814 17:37:29.821761   80228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:29.843829   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:29.859505   80228 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:29.859590   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:29.873790   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:29.889295   80228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:30.035909   80228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:30.209521   80228 docker.go:233] disabling docker service ...
	I0814 17:37:30.209574   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:30.226980   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:30.241678   80228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:30.375116   80228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:30.498357   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:30.512272   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:30.533062   80228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:37:30.533130   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.543595   80228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:30.543664   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.554139   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.564417   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
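The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.2, force cgroup_manager to cgroupfs, drop any stale conmon_cgroup line, and re-add conmon_cgroup = "pod" after the cgroup_manager line. A small Go sketch of the same string surgery on a hypothetical config snippet (not minikube's implementation):

// criocfg.go: hypothetical sketch mirroring the sed edits above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for the existing 02-crio.conf contents.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conmon := regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = conmon.ReplaceAllString(conf, "") // delete stale conmon_cgroup lines, like sed '/.../d'
	// Set the cgroup manager and append conmon_cgroup = "pod" right after it in one replacement.
	conf = cgroup.ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}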
	I0814 17:37:30.574627   80228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:30.584957   80228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:30.594667   80228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:30.594720   80228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:30.606826   80228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:30.621990   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:30.758992   80228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:30.915494   80228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:30.915572   80228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:30.920692   80228 start.go:563] Will wait 60s for crictl version
	I0814 17:37:30.920767   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:30.924365   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:30.964662   80228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:30.964756   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:30.995534   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:31.025400   80228 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:37:31.026943   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:31.030217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030630   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:31.030665   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030943   80228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:31.034960   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
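The one-liner above rebuilds /etc/hosts by filtering out any existing host.minikube.internal entry, appending the current one, and copying the temp file back into place. A minimal Go sketch of that upsert on an in-memory hosts string (file handling and sudo omitted):

// hostsentry.go: hypothetical sketch of the /etc/hosts rewrite above.
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<name>" and appends "ip\tname".
func upsertHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // equivalent of grep -v $'\thost.minikube.internal$'
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.72.1", "host.minikube.internal"))
}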
	I0814 17:37:31.047742   80228 kubeadm.go:883] updating cluster {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:31.047864   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:37:31.047926   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:31.092203   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:31.092278   80228 ssh_runner.go:195] Run: which lz4
	I0814 17:37:31.096471   80228 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:37:31.100610   80228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:31.100642   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:37:29.536310   79367 main.go:141] libmachine: (no-preload-545149) Calling .Start
	I0814 17:37:29.536513   79367 main.go:141] libmachine: (no-preload-545149) Ensuring networks are active...
	I0814 17:37:29.537431   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network default is active
	I0814 17:37:29.537935   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network mk-no-preload-545149 is active
	I0814 17:37:29.538468   79367 main.go:141] libmachine: (no-preload-545149) Getting domain xml...
	I0814 17:37:29.539383   79367 main.go:141] libmachine: (no-preload-545149) Creating domain...
	I0814 17:37:30.863155   79367 main.go:141] libmachine: (no-preload-545149) Waiting to get IP...
	I0814 17:37:30.864257   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:30.864722   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:30.864812   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:30.864695   81248 retry.go:31] will retry after 244.342973ms: waiting for machine to come up
	I0814 17:37:31.111211   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.111784   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.111806   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.111735   81248 retry.go:31] will retry after 277.033145ms: waiting for machine to come up
	I0814 17:37:31.390071   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.390511   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.390554   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.390429   81248 retry.go:31] will retry after 320.225451ms: waiting for machine to come up
	I0814 17:37:31.949069   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:34.445833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:31.741110   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:33.239418   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.239449   79871 pod_ready.go:81] duration metric: took 10.008084028s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.239462   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244600   79871 pod_ready.go:92] pod "kube-proxy-4rrff" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.244628   79871 pod_ready.go:81] duration metric: took 5.157296ms for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244648   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253613   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:35.253643   79871 pod_ready.go:81] duration metric: took 2.008985731s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253657   79871 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
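The interleaved pod_ready.go lines come from a poll loop: fetch the pod, inspect its Ready condition, sleep, and retry until the 4m0s deadline passes. A hedged client-go sketch of such a loop is below; the pod name, namespace, and kubeconfig path are taken from this log purely for illustration, and the 2-second interval is an assumption rather than minikube's exact code.

// podready.go: hypothetical sketch of waiting for a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19446-13977/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"metrics-server-6867b74b74-qtzm8", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // assumed polling interval
	}
	fmt.Println("timed out waiting for pod to become Ready")
}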
	I0814 17:37:32.582064   80228 crio.go:462] duration metric: took 1.485645107s to copy over tarball
	I0814 17:37:32.582151   80228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:35.556765   80228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974581109s)
	I0814 17:37:35.556795   80228 crio.go:469] duration metric: took 2.9747s to extract the tarball
	I0814 17:37:35.556802   80228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:35.599129   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:35.632752   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:35.632775   80228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:35.632831   80228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.632864   80228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.632892   80228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:37:35.632911   80228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.632944   80228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.633112   80228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634793   80228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.634821   80228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:37:35.634824   80228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634885   80228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.634910   80228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.635009   80228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.635082   80228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.635265   80228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.905566   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:37:35.953168   80228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:37:35.953210   80228 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:37:35.953260   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:35.957961   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:35.978859   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.978920   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.988556   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.993281   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.997933   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.018501   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.043527   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.146739   80228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:37:36.146812   80228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:37:36.146832   80228 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.146852   80228 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.146881   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.146891   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163832   80228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:37:36.163856   80228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:37:36.163877   80228 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.163889   80228 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.163923   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163924   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163927   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.172482   80228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:37:36.172530   80228 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.172599   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.195157   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.195208   80228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:37:36.195165   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.195242   80228 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.195245   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.195277   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.237454   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:37:36.237519   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.237549   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.292614   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.306771   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.306794   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:31.712067   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.712601   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.712630   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.712575   81248 retry.go:31] will retry after 546.687472ms: waiting for machine to come up
	I0814 17:37:32.261457   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.261921   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.261950   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.261854   81248 retry.go:31] will retry after 484.345236ms: waiting for machine to come up
	I0814 17:37:32.747475   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.748118   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.748149   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.748060   81248 retry.go:31] will retry after 899.564198ms: waiting for machine to come up
	I0814 17:37:33.649684   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:33.650206   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:33.650234   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:33.650155   81248 retry.go:31] will retry after 1.039934932s: waiting for machine to come up
	I0814 17:37:34.691741   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:34.692197   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:34.692220   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:34.692169   81248 retry.go:31] will retry after 925.402437ms: waiting for machine to come up
	I0814 17:37:35.618737   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:35.619169   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:35.619200   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:35.619102   81248 retry.go:31] will retry after 1.401066913s: waiting for machine to come up
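The repeated "waiting for machine to come up" lines show retry.go re-querying the domain's DHCP lease with a growing, jittered delay. A toy Go sketch of that retry pattern follows; the lease lookup is stubbed out and the intervals are only loosely modelled on the log.

// waitip.go: hypothetical sketch of a jittered retry loop waiting for a VM's IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying libvirt for the domain's DHCP lease.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease only shows up on the 5th try
		return "", errors.New("unable to find current IP address")
	}
	return "192.0.2.10", nil // placeholder address
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // grow roughly like the intervals in the log
	}
}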
	I0814 17:37:36.447039   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:38.945321   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:37.260912   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:39.759967   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:36.321893   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.339836   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.339929   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.426588   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.426659   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.433149   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.469717   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:36.477512   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.477583   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.477761   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.538635   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:37:36.557712   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:37:36.558304   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.700263   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:37:36.700333   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:37:36.700410   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:37:36.700481   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:37:36.700527   80228 cache_images.go:92] duration metric: took 1.067740607s to LoadCachedImages
	W0814 17:37:36.700602   80228 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 17:37:36.700623   80228 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0814 17:37:36.700757   80228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-505584 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:36.700846   80228 ssh_runner.go:195] Run: crio config
	I0814 17:37:36.748814   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:37:36.748843   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:36.748860   80228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:36.748885   80228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-505584 NodeName:old-k8s-version-505584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:37:36.749053   80228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-505584"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:36.749129   80228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:37:36.760058   80228 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:36.760131   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:36.769388   80228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0814 17:37:36.786594   80228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:36.807695   80228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
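The kubeadm.yaml staged above declares podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12, which kubeadm requires to be non-overlapping. A minimal Go sketch of that sanity check (not part of minikube):

// subnetcheck.go: hypothetical check that the pod and service CIDRs above are disjoint.
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks intersect; CIDRs are either disjoint or nested,
// so mutual containment of the network addresses is sufficient.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podSubnet, _ := net.ParseCIDR("10.244.0.0/16") // networking.podSubnet
	_, svcSubnet, _ := net.ParseCIDR("10.96.0.0/12")  // networking.serviceSubnet
	if overlaps(podSubnet, svcSubnet) {
		fmt.Println("pod and service CIDRs overlap; kubeadm would reject this config")
		return
	}
	fmt.Println("pod and service CIDRs are disjoint")
}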
	I0814 17:37:36.825609   80228 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:36.829296   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:36.841882   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:36.976199   80228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:36.993682   80228 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584 for IP: 192.168.72.49
	I0814 17:37:36.993707   80228 certs.go:194] generating shared ca certs ...
	I0814 17:37:36.993728   80228 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:36.993924   80228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:36.993985   80228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:36.993998   80228 certs.go:256] generating profile certs ...
	I0814 17:37:36.994115   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key
	I0814 17:37:36.994206   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f
	I0814 17:37:36.994261   80228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key
	I0814 17:37:36.994428   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:36.994478   80228 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:36.994492   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:36.994522   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:36.994557   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:36.994603   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:36.994661   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:36.995558   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:37.043910   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:37.073810   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:37.097939   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:37.124449   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 17:37:37.154747   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:37.179474   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:37.204471   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:37.228579   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:37.266929   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:37.292912   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:37.316803   80228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:37.332934   80228 ssh_runner.go:195] Run: openssl version
	I0814 17:37:37.339316   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:37.349829   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354230   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354297   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.360089   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:37.371417   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:37.381777   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385894   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385955   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.391826   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:37.402049   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:37.412038   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416395   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416448   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.421794   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:37.431868   80228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:37.436305   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:37.442838   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:37.448991   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:37.454769   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:37.460381   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:37.466406   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
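Each openssl x509 -checkend 86400 run above asks whether the certificate expires within the next 86400 seconds (24 hours). A minimal Go equivalent using crypto/x509; the path is one of the files checked above and is used here only for illustration.

// certcheck.go: report whether a PEM-encoded certificate expires within a window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if NotAfter falls before now+window, i.e. the cert expires within the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}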
	I0814 17:37:37.472466   80228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:37.472584   80228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:37.472636   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.508256   80228 cri.go:89] found id: ""
	I0814 17:37:37.508323   80228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:37.518824   80228 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:37.518856   80228 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:37.518941   80228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:37.529328   80228 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:37.530242   80228 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:37.530890   80228 kubeconfig.go:62] /home/jenkins/minikube-integration/19446-13977/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-505584" cluster setting kubeconfig missing "old-k8s-version-505584" context setting]
	I0814 17:37:37.531922   80228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:37.539843   80228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:37.550012   80228 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0814 17:37:37.550051   80228 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:37.550063   80228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:37.550113   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.590226   80228 cri.go:89] found id: ""
	I0814 17:37:37.590307   80228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:37.606242   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:37.615340   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:37.615377   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:37.615436   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:37:37.623996   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:37.624063   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:37.633356   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:37:37.642888   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:37.642958   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:37.652532   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.661607   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:37.661679   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.670876   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:37:37.679780   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:37.679846   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:37.690044   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
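The grep/rm/cp sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and deleted when the endpoint (or the file itself) is missing, so the kubeadm phases that follow can regenerate it. A minimal illustrative sketch of that pattern in Go (not minikube's actual code; the endpoint and paths are copied from the log lines above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, c := range confs {
			// grep exits non-zero when the endpoint (or the file) is missing,
			// which is the "may not be in ... - will remove" case seen above.
			if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
				fmt.Printf("%s looks stale, removing\n", c)
				_ = exec.Command("sudo", "rm", "-f", c).Run()
			}
		}
	}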
	I0814 17:37:37.699617   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:37.813799   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.666487   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.901307   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.029983   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.139056   80228 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:39.139133   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:39.639191   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.139315   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.639292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:41.139421   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
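The repeated pgrep lines above (and continuing below) are minikube polling for the kube-apiserver process after the kubeadm phases run. A minimal sketch of that wait loop in Go, with an illustrative two-minute deadline (not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // timeout chosen for illustration
		for time.Now().Before(deadline) {
			// Same check the log runs over SSH: -x exact match, -n newest, -f full command line.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process is up")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kube-apiserver process")
	}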
	I0814 17:37:37.021766   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:37.022253   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:37.022282   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:37.022216   81248 retry.go:31] will retry after 2.184222941s: waiting for machine to come up
	I0814 17:37:39.209777   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:39.210239   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:39.210265   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:39.210203   81248 retry.go:31] will retry after 2.903962154s: waiting for machine to come up
	I0814 17:37:41.445413   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:43.949816   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.760985   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:44.260273   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.639312   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.139387   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.639981   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.139499   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.639391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.139425   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.639677   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.139466   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.639426   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:46.140065   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.116682   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:42.117116   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:42.117154   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:42.117086   81248 retry.go:31] will retry after 3.387467992s: waiting for machine to come up
	I0814 17:37:45.505680   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:45.506034   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:45.506056   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:45.505986   81248 retry.go:31] will retry after 2.944973353s: waiting for machine to come up
	I0814 17:37:46.443772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:48.445046   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.759297   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:49.260881   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.640043   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.139213   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.639848   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.140080   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.639961   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.139473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.639212   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.139781   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.640028   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.140140   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.452516   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453064   79367 main.go:141] libmachine: (no-preload-545149) Found IP for machine: 192.168.39.162
	I0814 17:37:48.453092   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has current primary IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453099   79367 main.go:141] libmachine: (no-preload-545149) Reserving static IP address...
	I0814 17:37:48.453513   79367 main.go:141] libmachine: (no-preload-545149) Reserved static IP address: 192.168.39.162
	I0814 17:37:48.453564   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.453578   79367 main.go:141] libmachine: (no-preload-545149) Waiting for SSH to be available...
	I0814 17:37:48.453608   79367 main.go:141] libmachine: (no-preload-545149) DBG | skip adding static IP to network mk-no-preload-545149 - found existing host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"}
	I0814 17:37:48.453630   79367 main.go:141] libmachine: (no-preload-545149) DBG | Getting to WaitForSSH function...
	I0814 17:37:48.455959   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456279   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.456304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456429   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH client type: external
	I0814 17:37:48.456449   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa (-rw-------)
	I0814 17:37:48.456490   79367 main.go:141] libmachine: (no-preload-545149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:48.456506   79367 main.go:141] libmachine: (no-preload-545149) DBG | About to run SSH command:
	I0814 17:37:48.456520   79367 main.go:141] libmachine: (no-preload-545149) DBG | exit 0
	I0814 17:37:48.579489   79367 main.go:141] libmachine: (no-preload-545149) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:48.579924   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetConfigRaw
	I0814 17:37:48.580615   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.583202   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583545   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.583592   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583857   79367 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/config.json ...
	I0814 17:37:48.584093   79367 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:48.584113   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:48.584340   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.586712   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587086   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.587107   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587259   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.587441   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587720   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.587869   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.588029   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.588040   79367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:48.691255   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:48.691290   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691555   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:37:48.691593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691798   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.694492   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694768   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.694797   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694907   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.695084   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695248   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695396   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.695556   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.695777   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.695798   79367 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-545149 && echo "no-preload-545149" | sudo tee /etc/hostname
	I0814 17:37:48.813509   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-545149
	
	I0814 17:37:48.813537   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.816304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816698   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.816732   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816884   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.817057   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817265   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817393   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.817586   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.817809   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.817836   79367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:48.927482   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:48.927512   79367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:48.927540   79367 buildroot.go:174] setting up certificates
	I0814 17:37:48.927551   79367 provision.go:84] configureAuth start
	I0814 17:37:48.927567   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.927831   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.930532   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.930879   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.930906   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.931104   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.933420   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933754   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.933783   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933893   79367 provision.go:143] copyHostCerts
	I0814 17:37:48.933968   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:48.933979   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:48.934040   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:48.934146   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:48.934156   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:48.934186   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:48.934262   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:48.934271   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:48.934302   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:48.934377   79367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.no-preload-545149 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-545149]
	I0814 17:37:49.287517   79367 provision.go:177] copyRemoteCerts
	I0814 17:37:49.287580   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:49.287607   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.290280   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.290690   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290856   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.291063   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.291180   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.291304   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.374716   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:49.398652   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:37:49.422885   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:37:49.448774   79367 provision.go:87] duration metric: took 521.207251ms to configureAuth
	I0814 17:37:49.448800   79367 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:49.448972   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:49.449064   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.452034   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452373   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.452403   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452604   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.452859   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453058   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453217   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.453388   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.453579   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.453601   79367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:49.711896   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:49.711922   79367 machine.go:97] duration metric: took 1.127817152s to provisionDockerMachine
	I0814 17:37:49.711933   79367 start.go:293] postStartSetup for "no-preload-545149" (driver="kvm2")
	I0814 17:37:49.711942   79367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:49.711977   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.712299   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:49.712324   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.714736   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715059   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.715097   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715232   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.715428   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.715616   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.715769   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.797746   79367 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:49.801764   79367 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:49.801794   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:49.801863   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:49.801960   79367 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:49.802081   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:49.811506   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:49.834762   79367 start.go:296] duration metric: took 122.81358ms for postStartSetup
	I0814 17:37:49.834812   79367 fix.go:56] duration metric: took 20.32268926s for fixHost
	I0814 17:37:49.834837   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.837418   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837739   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.837768   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837903   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.838114   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838292   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838438   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.838643   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.838838   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.838850   79367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:49.944936   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657069.919883473
	
	I0814 17:37:49.944965   79367 fix.go:216] guest clock: 1723657069.919883473
	I0814 17:37:49.944975   79367 fix.go:229] Guest: 2024-08-14 17:37:49.919883473 +0000 UTC Remote: 2024-08-14 17:37:49.834818813 +0000 UTC m=+358.184638535 (delta=85.06466ms)
	I0814 17:37:49.945005   79367 fix.go:200] guest clock delta is within tolerance: 85.06466ms
	I0814 17:37:49.945017   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 20.432923283s
	I0814 17:37:49.945044   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.945291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:49.947847   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948269   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.948295   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948500   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949082   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949262   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949347   79367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:49.949406   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.949517   79367 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:49.949541   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.952281   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952328   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952692   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952833   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.952836   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952895   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.953037   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.953075   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953201   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953212   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953400   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953412   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.953543   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:50.072094   79367 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:50.080210   79367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:50.227736   79367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:50.233533   79367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:50.233597   79367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:50.249452   79367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:50.249474   79367 start.go:495] detecting cgroup driver to use...
	I0814 17:37:50.249552   79367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:50.265740   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:50.278769   79367 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:50.278833   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:50.291625   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:50.304529   79367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:50.415405   79367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:50.556016   79367 docker.go:233] disabling docker service ...
	I0814 17:37:50.556092   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:50.570197   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:50.583068   79367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:50.721414   79367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:50.850890   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:50.864530   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:50.882021   79367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:50.882097   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.891490   79367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:50.891564   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.901437   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.911316   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.920935   79367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:50.930571   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.940106   79367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.957351   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
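Taken together, the sed edits above set the pause image, switch CRI-O to the cgroupfs cgroup manager, and open unprivileged low ports. Assuming an otherwise default /etc/crio/crio.conf.d/02-crio.conf, the affected keys end up roughly as follows (a sketch for orientation, not the actual file contents):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]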
	I0814 17:37:50.967222   79367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:50.976120   79367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:50.976170   79367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:50.990922   79367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:51.000086   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:51.116655   79367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:51.246182   79367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:51.246265   79367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:51.250838   79367 start.go:563] Will wait 60s for crictl version
	I0814 17:37:51.250900   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.254633   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:51.299890   79367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:51.299992   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.328292   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.360415   79367 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:51.361536   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:51.364443   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.364884   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:51.364914   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.365112   79367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:51.368941   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:51.380519   79367 kubeadm.go:883] updating cluster {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:51.380668   79367 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:51.380705   79367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:51.413314   79367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:51.413346   79367 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:51.413417   79367 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.413435   79367 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.413452   79367 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.413395   79367 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.413473   79367 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 17:37:51.413440   79367 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.413521   79367 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.413529   79367 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.414920   79367 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.414940   79367 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 17:37:51.414983   79367 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.415006   79367 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.415010   79367 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.414982   79367 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.415070   79367 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.415100   79367 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.664642   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.686463   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:50.445457   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:52.945915   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.762809   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:54.259593   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.639969   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.139918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.639403   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.139921   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.640224   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.140272   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.639242   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.139908   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.639233   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:56.139955   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.699627   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 17:37:51.718031   79367 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 17:37:51.718085   79367 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.718133   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.736370   79367 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 17:37:51.736408   79367 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.736454   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.779229   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.800986   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.819343   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.841240   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.853614   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.853650   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.853753   79367 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 17:37:51.853798   79367 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.853842   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.866717   79367 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 17:37:51.866757   79367 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.866807   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.908593   79367 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 17:37:51.908644   79367 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.908701   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.936701   79367 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 17:37:51.936737   79367 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.936784   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.944882   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.944962   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.944983   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.945008   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.945070   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.945089   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.063281   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.080543   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.080556   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.080574   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:52.080629   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:52.080647   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.126573   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.205600   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.205623   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.236617   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 17:37:52.236678   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.236757   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.237083   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 17:37:52.237161   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:52.238804   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 17:37:52.238891   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:52.294945   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 17:37:52.295018   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 17:37:52.295064   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:37:52.295103   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 17:37:52.295127   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295189   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295110   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:52.302365   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 17:37:52.302388   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 17:37:52.302423   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 17:37:52.302472   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:37:52.306933   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 17:37:52.307107   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 17:37:52.309298   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.976780716s)
	I0814 17:37:54.272032   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 17:37:54.272053   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.272063   79367 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.962736886s)
	I0814 17:37:54.272100   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.969503874s)
	I0814 17:37:54.272150   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 17:37:54.272105   79367 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 17:37:54.272192   79367 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.272250   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:56.021236   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.749108117s)
	I0814 17:37:56.021281   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 17:37:56.021288   79367 ssh_runner.go:235] Completed: which crictl: (1.749013682s)
	I0814 17:37:56.021309   79367 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021370   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021386   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:55.445017   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:57.445204   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:59.945329   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.260666   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:58.760907   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.639799   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.140184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.639918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.139310   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.639393   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.140139   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.639614   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.139472   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.139946   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.830150   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.808753337s)
	I0814 17:37:59.830181   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 17:37:59.830205   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:59.830208   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.80880721s)
	I0814 17:37:59.830253   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:59.830255   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:38:02.444320   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:04.444667   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.260951   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:03.759895   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.639422   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.139858   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.639412   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.140047   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.640170   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.139779   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.639728   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.139343   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.640249   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.139448   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.796675   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.966400982s)
	I0814 17:38:01.796690   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.966414051s)
	I0814 17:38:01.796708   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 17:38:01.796735   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.796757   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:38:01.796796   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.841898   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 17:38:01.841994   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.571965   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.775142217s)
	I0814 17:38:03.571991   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.729967853s)
	I0814 17:38:03.572002   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 17:38:03.572019   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 17:38:03.572028   79367 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.572079   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:04.422670   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 17:38:04.422705   79367 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:04.422760   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:06.277419   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.854632861s)
	I0814 17:38:06.277457   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 17:38:06.277488   79367 cache_images.go:123] Successfully loaded all cached images
	I0814 17:38:06.277494   79367 cache_images.go:92] duration metric: took 14.864134758s to LoadCachedImages
	I0814 17:38:06.277504   79367 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.0 crio true true} ...
	I0814 17:38:06.277628   79367 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-545149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:38:06.277690   79367 ssh_runner.go:195] Run: crio config
	I0814 17:38:06.337971   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:06.337990   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:06.337999   79367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:38:06.338019   79367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545149 NodeName:no-preload-545149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:38:06.338148   79367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:38:06.338222   79367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:38:06.348156   79367 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:38:06.348219   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:38:06.356784   79367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0814 17:38:06.372439   79367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:38:06.388610   79367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0814 17:38:06.405084   79367 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I0814 17:38:06.408753   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:38:06.420313   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:38:06.546115   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:38:06.563747   79367 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149 for IP: 192.168.39.162
	I0814 17:38:06.563776   79367 certs.go:194] generating shared ca certs ...
	I0814 17:38:06.563799   79367 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:38:06.563973   79367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:38:06.564035   79367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:38:06.564058   79367 certs.go:256] generating profile certs ...
	I0814 17:38:06.564150   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/client.key
	I0814 17:38:06.564207   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key.d0704694
	I0814 17:38:06.564241   79367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key
	I0814 17:38:06.564349   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:38:06.564377   79367 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:38:06.564386   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:38:06.564411   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:38:06.564437   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:38:06.564459   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:38:06.564497   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:38:06.565269   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:38:06.592622   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:38:06.619148   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:38:06.646169   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:38:06.682399   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:38:06.446354   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.948005   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:05.760991   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.260189   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:10.260816   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:06.639416   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.140176   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.639682   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.140063   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.640014   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.139435   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.639256   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.139949   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.640283   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:11.139394   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.714195   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:38:06.750431   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:38:06.772702   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:38:06.793932   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:38:06.815601   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:38:06.837187   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:38:06.858175   79367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:38:06.876187   79367 ssh_runner.go:195] Run: openssl version
	I0814 17:38:06.881909   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:38:06.892057   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896191   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896251   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.901630   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:38:06.910888   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:38:06.920223   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924480   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924527   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.929591   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:38:06.939072   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:38:06.949970   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954288   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954339   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.959551   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:38:06.969505   79367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:38:06.973905   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:38:06.980211   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:38:06.986779   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:38:06.992220   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:38:06.997446   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:38:07.002681   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:38:07.008037   79367 kubeadm.go:392] StartCluster: {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:38:07.008131   79367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:38:07.008188   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.043144   79367 cri.go:89] found id: ""
	I0814 17:38:07.043214   79367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:38:07.052215   79367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:38:07.052233   79367 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:38:07.052281   79367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:38:07.060618   79367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:38:07.061557   79367 kubeconfig.go:125] found "no-preload-545149" server: "https://192.168.39.162:8443"
	I0814 17:38:07.063554   79367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:38:07.072026   79367 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I0814 17:38:07.072064   79367 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:38:07.072075   79367 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:38:07.072117   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.109349   79367 cri.go:89] found id: ""
	I0814 17:38:07.109412   79367 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:38:07.126888   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:38:07.138272   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:38:07.138293   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:38:07.138367   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:38:07.147160   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:38:07.147220   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:38:07.156664   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:38:07.165122   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:38:07.165167   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:38:07.173478   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.181391   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:38:07.181449   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.189750   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:38:07.198215   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:38:07.198274   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:38:07.207384   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:38:07.216034   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:07.337710   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.227720   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.455979   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.521250   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.654574   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:38:08.654684   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.155639   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.655182   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.696193   79367 api_server.go:72] duration metric: took 1.041620068s to wait for apiserver process to appear ...
	I0814 17:38:09.696223   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:38:09.696241   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:09.696703   79367 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I0814 17:38:10.197180   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.389673   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.389702   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.389717   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.403106   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.403138   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.696486   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.700755   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:12.700784   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.196293   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.200564   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:13.200592   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.697253   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.705430   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:38:13.732816   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:38:13.732843   79367 api_server.go:131] duration metric: took 4.036614106s to wait for apiserver health ...
	I0814 17:38:13.732852   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:13.732860   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:13.734904   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:38:11.444846   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:13.943583   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:12.759294   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:14.760919   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:11.640107   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.140034   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.639463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.139428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.639575   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.140005   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.639473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.140124   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.639459   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:16.139187   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.736533   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:38:13.756650   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:38:13.776947   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:38:13.803170   79367 system_pods.go:59] 8 kube-system pods found
	I0814 17:38:13.803214   79367 system_pods.go:61] "coredns-6f6b679f8f-tt46z" [169beaf0-0310-47d5-b212-9a81c6b3df68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:38:13.803228   79367 system_pods.go:61] "etcd-no-preload-545149" [47e22bb4-bedb-433f-ae2e-f281269c6e87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:38:13.803240   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [37854a66-b05b-49fe-834b-98f724087ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:38:13.803249   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [69189ec1-6f8c-4613-bf47-46e101a14ecd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:38:13.803307   79367 system_pods.go:61] "kube-proxy-gfrqp" [2206243d-f6e0-462c-969c-60e192038700] Running
	I0814 17:38:13.803314   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [0bbd41bd-0a18-486b-b78c-9a0e9efe209a] Running
	I0814 17:38:13.803322   79367 system_pods.go:61] "metrics-server-6867b74b74-8c2cx" [b30f3018-f316-4997-a8fa-ff6c83aa7dd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:38:13.803341   79367 system_pods.go:61] "storage-provisioner" [635027cc-ac5d-4474-a243-ef48b3580998] Running
	I0814 17:38:13.803349   79367 system_pods.go:74] duration metric: took 26.377795ms to wait for pod list to return data ...
	I0814 17:38:13.803357   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:38:13.814093   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:38:13.814120   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:38:13.814131   79367 node_conditions.go:105] duration metric: took 10.768606ms to run NodePressure ...
	I0814 17:38:13.814147   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:14.196481   79367 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202205   79367 kubeadm.go:739] kubelet initialised
	I0814 17:38:14.202239   79367 kubeadm.go:740] duration metric: took 5.723699ms waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202250   79367 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:38:14.209431   79367 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.215568   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215597   79367 pod_ready.go:81] duration metric: took 6.13175ms for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.215610   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215620   79367 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.227611   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227647   79367 pod_ready.go:81] duration metric: took 12.016107ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.227661   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227669   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.235095   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235130   79367 pod_ready.go:81] duration metric: took 7.452418ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.235143   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235153   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.244417   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244447   79367 pod_ready.go:81] duration metric: took 9.283911ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.244459   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244466   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999946   79367 pod_ready.go:92] pod "kube-proxy-gfrqp" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:14.999968   79367 pod_ready.go:81] duration metric: took 755.491905ms for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999977   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:15.945421   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:18.444758   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.761265   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.639219   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.139463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.639839   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.140251   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.639890   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.139999   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.639652   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.139316   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.639809   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:21.139471   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.005796   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.006769   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:20.506792   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:20.506815   79367 pod_ready.go:81] duration metric: took 5.50683258s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.506825   79367 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.445449   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:22.446622   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:24.943859   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.760870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:23.761708   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.139292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.640151   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.139450   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.639996   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.139447   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.639267   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.139595   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.639451   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:26.140190   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.513577   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:25.012936   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.945216   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.444769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.260276   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:28.263789   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.640120   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.140141   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.640184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.139896   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.140246   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.639895   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.139860   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.639358   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:31.140029   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.014354   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.516049   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.944967   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.444885   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:30.760174   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:33.259870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:35.260137   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.639317   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.140039   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.139240   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.640181   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.139789   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.639297   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.139871   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.639347   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:36.140044   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.013464   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.513741   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.944347   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:38.945374   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:37.759866   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:39.760334   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.640132   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.139254   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.639457   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.139928   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.639196   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:39.139906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:39.139980   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:39.179494   80228 cri.go:89] found id: ""
	I0814 17:38:39.179524   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.179535   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:39.179543   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:39.179619   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:39.210704   80228 cri.go:89] found id: ""
	I0814 17:38:39.210732   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.210741   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:39.210746   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:39.210796   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:39.247562   80228 cri.go:89] found id: ""
	I0814 17:38:39.247590   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.247597   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:39.247603   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:39.247665   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:39.281456   80228 cri.go:89] found id: ""
	I0814 17:38:39.281480   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.281488   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:39.281494   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:39.281553   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:39.318588   80228 cri.go:89] found id: ""
	I0814 17:38:39.318620   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.318630   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:39.318638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:39.318695   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:39.350270   80228 cri.go:89] found id: ""
	I0814 17:38:39.350294   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.350303   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:39.350311   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:39.350387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:39.382168   80228 cri.go:89] found id: ""
	I0814 17:38:39.382198   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.382209   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:39.382215   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:39.382325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:39.415307   80228 cri.go:89] found id: ""
	I0814 17:38:39.415342   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.415354   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:39.415375   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:39.415388   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:39.469591   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:39.469632   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:39.482909   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:39.482942   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:39.609874   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:39.609906   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:39.609921   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:39.683210   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:39.683253   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:39.013876   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.513527   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.444286   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:43.444539   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.260548   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:44.263171   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.222687   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:42.235017   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:42.235088   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:42.285518   80228 cri.go:89] found id: ""
	I0814 17:38:42.285544   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.285553   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:42.285559   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:42.285614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:42.320462   80228 cri.go:89] found id: ""
	I0814 17:38:42.320492   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.320500   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:42.320506   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:42.320594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:42.353484   80228 cri.go:89] found id: ""
	I0814 17:38:42.353515   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.353528   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:42.353537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:42.353614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:42.388122   80228 cri.go:89] found id: ""
	I0814 17:38:42.388152   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.388163   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:42.388171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:42.388239   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:42.420246   80228 cri.go:89] found id: ""
	I0814 17:38:42.420275   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.420285   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:42.420293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:42.420359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:42.454636   80228 cri.go:89] found id: ""
	I0814 17:38:42.454669   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.454680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:42.454687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:42.454749   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:42.494638   80228 cri.go:89] found id: ""
	I0814 17:38:42.494670   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.494679   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:42.494686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:42.494751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:42.532224   80228 cri.go:89] found id: ""
	I0814 17:38:42.532257   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.532269   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:42.532281   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:42.532296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:42.546100   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:42.546132   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:42.616561   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:42.616589   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:42.616604   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:42.697269   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:42.697305   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:42.737787   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:42.737821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.293788   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:45.309020   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:45.309080   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:45.349218   80228 cri.go:89] found id: ""
	I0814 17:38:45.349246   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.349254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:45.349260   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:45.349318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:45.387622   80228 cri.go:89] found id: ""
	I0814 17:38:45.387653   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.387664   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:45.387672   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:45.387750   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:45.422120   80228 cri.go:89] found id: ""
	I0814 17:38:45.422154   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.422164   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:45.422169   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:45.422226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:45.457309   80228 cri.go:89] found id: ""
	I0814 17:38:45.457337   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.457352   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:45.457361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:45.457412   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:45.488969   80228 cri.go:89] found id: ""
	I0814 17:38:45.489000   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.489011   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:45.489019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:45.489081   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:45.522230   80228 cri.go:89] found id: ""
	I0814 17:38:45.522258   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.522273   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:45.522280   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:45.522345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:45.555394   80228 cri.go:89] found id: ""
	I0814 17:38:45.555425   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.555440   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:45.555448   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:45.555520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:45.587870   80228 cri.go:89] found id: ""
	I0814 17:38:45.587899   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.587910   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:45.587934   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:45.587951   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.638662   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:45.638709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:45.652217   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:45.652248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:45.733611   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:45.733635   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:45.733648   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:45.822733   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:45.822773   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:44.013405   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.014164   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:45.445289   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:47.944672   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.760279   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:49.260108   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:48.361519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:48.374848   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:48.374916   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:48.410849   80228 cri.go:89] found id: ""
	I0814 17:38:48.410897   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.410911   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:48.410920   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:48.410986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:48.448507   80228 cri.go:89] found id: ""
	I0814 17:38:48.448530   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.448537   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:48.448543   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:48.448594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:48.486257   80228 cri.go:89] found id: ""
	I0814 17:38:48.486298   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.486306   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:48.486312   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:48.486363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:48.520447   80228 cri.go:89] found id: ""
	I0814 17:38:48.520473   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.520482   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:48.520487   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:48.520544   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:48.552659   80228 cri.go:89] found id: ""
	I0814 17:38:48.552690   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.552698   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:48.552704   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:48.552768   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:48.585302   80228 cri.go:89] found id: ""
	I0814 17:38:48.585331   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.585341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:48.585348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:48.585415   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:48.617388   80228 cri.go:89] found id: ""
	I0814 17:38:48.617417   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.617428   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:48.617435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:48.617503   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:48.658987   80228 cri.go:89] found id: ""
	I0814 17:38:48.659012   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.659019   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:48.659027   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:48.659041   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:48.719882   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:48.719915   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:48.738962   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:48.738994   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:48.807703   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:48.807727   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:48.807739   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:48.886555   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:48.886585   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:48.514199   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.013627   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:50.444135   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:52.444957   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.446434   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.760518   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.260283   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.423653   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:51.436700   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:51.436792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:51.473198   80228 cri.go:89] found id: ""
	I0814 17:38:51.473227   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.473256   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:51.473262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:51.473311   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:51.508631   80228 cri.go:89] found id: ""
	I0814 17:38:51.508664   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.508675   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:51.508682   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:51.508743   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:51.540917   80228 cri.go:89] found id: ""
	I0814 17:38:51.540950   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.540958   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:51.540965   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:51.541014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:51.578112   80228 cri.go:89] found id: ""
	I0814 17:38:51.578140   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.578150   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:51.578158   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:51.578220   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:51.612664   80228 cri.go:89] found id: ""
	I0814 17:38:51.612692   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.612700   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:51.612706   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:51.612756   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:51.646374   80228 cri.go:89] found id: ""
	I0814 17:38:51.646399   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.646407   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:51.646413   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:51.646463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:51.682052   80228 cri.go:89] found id: ""
	I0814 17:38:51.682081   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.682092   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:51.682098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:51.682147   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:51.722625   80228 cri.go:89] found id: ""
	I0814 17:38:51.722653   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.722663   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:51.722674   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:51.722687   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:51.771788   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:51.771818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:51.785403   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:51.785432   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:51.854249   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:51.854269   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:51.854281   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:51.938121   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:51.938155   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.475672   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:54.491309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:54.491399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:54.524971   80228 cri.go:89] found id: ""
	I0814 17:38:54.525001   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.525011   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:54.525023   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:54.525087   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:54.560631   80228 cri.go:89] found id: ""
	I0814 17:38:54.560661   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.560670   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:54.560675   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:54.560728   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:54.595710   80228 cri.go:89] found id: ""
	I0814 17:38:54.595740   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.595751   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:54.595759   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:54.595824   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:54.631449   80228 cri.go:89] found id: ""
	I0814 17:38:54.631476   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.631487   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:54.631495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:54.631557   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:54.666492   80228 cri.go:89] found id: ""
	I0814 17:38:54.666526   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.666539   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:54.666548   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:54.666617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:54.701111   80228 cri.go:89] found id: ""
	I0814 17:38:54.701146   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.701158   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:54.701166   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:54.701226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:54.737535   80228 cri.go:89] found id: ""
	I0814 17:38:54.737574   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.737585   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:54.737595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:54.737653   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:54.771658   80228 cri.go:89] found id: ""
	I0814 17:38:54.771679   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.771686   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:54.771694   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:54.771709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:54.841798   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:54.841817   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:54.841829   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:54.930861   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:54.930917   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.970508   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:54.970540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:55.023077   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:55.023123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:53.513137   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.014005   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.945376   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:59.445437   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.260436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:58.759613   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:57.538876   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:57.551796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:57.551868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:57.584576   80228 cri.go:89] found id: ""
	I0814 17:38:57.584601   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.584609   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:57.584617   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:57.584687   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:57.617209   80228 cri.go:89] found id: ""
	I0814 17:38:57.617239   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.617249   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:57.617257   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:57.617338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:57.650062   80228 cri.go:89] found id: ""
	I0814 17:38:57.650089   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.650096   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:57.650102   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:57.650160   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:57.681118   80228 cri.go:89] found id: ""
	I0814 17:38:57.681146   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.681154   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:57.681160   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:57.681228   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:57.713803   80228 cri.go:89] found id: ""
	I0814 17:38:57.713834   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.713842   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:57.713851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:57.713920   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:57.749555   80228 cri.go:89] found id: ""
	I0814 17:38:57.749594   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.749604   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:57.749613   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:57.749677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:57.782714   80228 cri.go:89] found id: ""
	I0814 17:38:57.782744   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.782755   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:57.782763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:57.782826   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:57.815386   80228 cri.go:89] found id: ""
	I0814 17:38:57.815414   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.815423   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:57.815436   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:57.815450   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:57.868153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:57.868183   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:57.881022   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:57.881053   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:57.950474   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:57.950501   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:57.950515   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:58.032611   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:58.032644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:00.569493   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:00.583257   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:00.583384   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:00.614680   80228 cri.go:89] found id: ""
	I0814 17:39:00.614712   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.614723   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:00.614732   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:00.614792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:00.648161   80228 cri.go:89] found id: ""
	I0814 17:39:00.648189   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.648196   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:00.648203   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:00.648256   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:00.681844   80228 cri.go:89] found id: ""
	I0814 17:39:00.681872   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.681883   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:00.681890   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:00.681952   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:00.714773   80228 cri.go:89] found id: ""
	I0814 17:39:00.714804   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.714815   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:00.714823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:00.714891   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:00.747748   80228 cri.go:89] found id: ""
	I0814 17:39:00.747774   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.747781   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:00.747787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:00.747845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:00.783135   80228 cri.go:89] found id: ""
	I0814 17:39:00.783168   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.783179   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:00.783186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:00.783242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:00.817505   80228 cri.go:89] found id: ""
	I0814 17:39:00.817541   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.817552   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:00.817567   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:00.817633   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:00.849205   80228 cri.go:89] found id: ""
	I0814 17:39:00.849231   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.849241   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:00.849252   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:00.849273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:00.902529   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:00.902567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:00.916313   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:00.916346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:00.988708   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:00.988725   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:00.988737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:01.063818   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:01.063853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:58.512313   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.513694   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:01.944987   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.945640   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.759979   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.259928   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.603241   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:03.616400   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:03.616504   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:03.649580   80228 cri.go:89] found id: ""
	I0814 17:39:03.649619   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.649637   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:03.649650   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:03.649718   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:03.686252   80228 cri.go:89] found id: ""
	I0814 17:39:03.686274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.686282   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:03.686289   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:03.686349   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:03.720995   80228 cri.go:89] found id: ""
	I0814 17:39:03.721024   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.721036   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:03.721043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:03.721094   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:03.753466   80228 cri.go:89] found id: ""
	I0814 17:39:03.753491   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.753500   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:03.753506   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:03.753554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:03.794427   80228 cri.go:89] found id: ""
	I0814 17:39:03.794450   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.794458   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:03.794464   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:03.794524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:03.826245   80228 cri.go:89] found id: ""
	I0814 17:39:03.826274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.826282   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:03.826288   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:03.826355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:03.857208   80228 cri.go:89] found id: ""
	I0814 17:39:03.857238   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.857247   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:03.857253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:03.857325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:03.892840   80228 cri.go:89] found id: ""
	I0814 17:39:03.892864   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.892871   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:03.892879   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:03.892891   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:03.948554   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:03.948579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:03.962222   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:03.962249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:04.031833   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:04.031859   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:04.031875   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:04.109572   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:04.109636   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:03.013542   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.513201   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.444222   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:08.444833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.759653   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:07.760063   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.259961   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.646923   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:06.659699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:06.659757   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:06.691918   80228 cri.go:89] found id: ""
	I0814 17:39:06.691941   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.691951   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:06.691958   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:06.692016   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:06.722675   80228 cri.go:89] found id: ""
	I0814 17:39:06.722703   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.722713   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:06.722720   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:06.722782   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:06.757215   80228 cri.go:89] found id: ""
	I0814 17:39:06.757248   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.757259   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:06.757266   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:06.757333   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:06.791337   80228 cri.go:89] found id: ""
	I0814 17:39:06.791370   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.791378   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:06.791384   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:06.791440   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:06.825182   80228 cri.go:89] found id: ""
	I0814 17:39:06.825209   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.825220   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:06.825234   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:06.825288   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:06.857473   80228 cri.go:89] found id: ""
	I0814 17:39:06.857498   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.857507   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:06.857514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:06.857582   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:06.891293   80228 cri.go:89] found id: ""
	I0814 17:39:06.891343   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.891355   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:06.891363   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:06.891421   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:06.927476   80228 cri.go:89] found id: ""
	I0814 17:39:06.927505   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.927516   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:06.927527   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:06.927541   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:06.980604   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:06.980635   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:06.994648   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:06.994678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:07.072554   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:07.072580   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:07.072599   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:07.153141   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:07.153182   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:09.693348   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:09.705754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:09.705814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:09.739674   80228 cri.go:89] found id: ""
	I0814 17:39:09.739706   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.739717   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:09.739724   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:09.739788   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:09.774381   80228 cri.go:89] found id: ""
	I0814 17:39:09.774405   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.774413   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:09.774420   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:09.774478   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:09.806586   80228 cri.go:89] found id: ""
	I0814 17:39:09.806614   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.806623   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:09.806629   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:09.806696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:09.839564   80228 cri.go:89] found id: ""
	I0814 17:39:09.839594   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.839602   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:09.839614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:09.839672   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:09.872338   80228 cri.go:89] found id: ""
	I0814 17:39:09.872373   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.872385   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:09.872393   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:09.872457   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:09.904184   80228 cri.go:89] found id: ""
	I0814 17:39:09.904223   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.904231   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:09.904253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:09.904312   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:09.937217   80228 cri.go:89] found id: ""
	I0814 17:39:09.937242   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.937251   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:09.937259   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:09.937322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:09.972273   80228 cri.go:89] found id: ""
	I0814 17:39:09.972301   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.972313   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:09.972325   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:09.972341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:10.023736   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:10.023764   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:10.036654   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:10.036681   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:10.104031   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:10.104052   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:10.104068   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:10.187816   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:10.187853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:08.013632   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.513090   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.944491   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.945542   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.260129   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:14.759867   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.727237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:12.741970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:12.742041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:12.778721   80228 cri.go:89] found id: ""
	I0814 17:39:12.778748   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.778758   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:12.778765   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:12.778820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:12.812575   80228 cri.go:89] found id: ""
	I0814 17:39:12.812603   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.812610   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:12.812619   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:12.812678   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:12.845697   80228 cri.go:89] found id: ""
	I0814 17:39:12.845726   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.845737   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:12.845744   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:12.845809   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:12.879491   80228 cri.go:89] found id: ""
	I0814 17:39:12.879518   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.879529   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:12.879536   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:12.879604   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:12.912321   80228 cri.go:89] found id: ""
	I0814 17:39:12.912348   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.912356   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:12.912361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:12.912410   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:12.948866   80228 cri.go:89] found id: ""
	I0814 17:39:12.948889   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.948897   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:12.948903   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:12.948963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:12.983394   80228 cri.go:89] found id: ""
	I0814 17:39:12.983444   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.983459   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:12.983466   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:12.983530   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:13.018406   80228 cri.go:89] found id: ""
	I0814 17:39:13.018427   80228 logs.go:276] 0 containers: []
	W0814 17:39:13.018434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:13.018442   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:13.018457   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.069615   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:13.069655   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:13.082618   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:13.082651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:13.145033   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:13.145054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:13.145067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:13.225074   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:13.225108   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:15.765512   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:15.778320   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:15.778380   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:15.812847   80228 cri.go:89] found id: ""
	I0814 17:39:15.812876   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.812885   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:15.812896   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:15.812944   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:15.845131   80228 cri.go:89] found id: ""
	I0814 17:39:15.845159   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.845169   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:15.845176   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:15.845242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:15.879763   80228 cri.go:89] found id: ""
	I0814 17:39:15.879789   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.879799   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:15.879807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:15.879864   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:15.912746   80228 cri.go:89] found id: ""
	I0814 17:39:15.912776   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.912784   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:15.912797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:15.912858   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:15.946433   80228 cri.go:89] found id: ""
	I0814 17:39:15.946456   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.946465   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:15.946473   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:15.946534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:15.980060   80228 cri.go:89] found id: ""
	I0814 17:39:15.980086   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.980096   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:15.980103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:15.980167   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:16.011539   80228 cri.go:89] found id: ""
	I0814 17:39:16.011570   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.011581   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:16.011590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:16.011660   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:16.046019   80228 cri.go:89] found id: ""
	I0814 17:39:16.046046   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.046057   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:16.046068   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:16.046083   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:16.058442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:16.058470   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:16.132775   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:16.132799   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:16.132811   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:16.218360   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:16.218398   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:16.258070   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:16.258096   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.013275   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.013967   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.444280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:17.444827   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.943845   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:16.760773   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.259891   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:18.813127   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:18.826187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:18.826267   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:18.858405   80228 cri.go:89] found id: ""
	I0814 17:39:18.858433   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.858444   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:18.858452   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:18.858524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:18.893302   80228 cri.go:89] found id: ""
	I0814 17:39:18.893335   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.893342   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:18.893350   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:18.893417   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:18.929885   80228 cri.go:89] found id: ""
	I0814 17:39:18.929919   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.929929   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:18.929937   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:18.930000   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:18.966758   80228 cri.go:89] found id: ""
	I0814 17:39:18.966783   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.966792   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:18.966799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:18.966861   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:18.999815   80228 cri.go:89] found id: ""
	I0814 17:39:18.999838   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.999845   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:18.999851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:18.999915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:19.033737   80228 cri.go:89] found id: ""
	I0814 17:39:19.033761   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.033768   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:19.033774   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:19.033830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:19.070080   80228 cri.go:89] found id: ""
	I0814 17:39:19.070105   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.070113   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:19.070119   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:19.070190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:19.102868   80228 cri.go:89] found id: ""
	I0814 17:39:19.102897   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.102907   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:19.102918   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:19.102932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:19.156525   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:19.156569   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:19.170193   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:19.170225   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:19.236521   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:19.236547   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:19.236561   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:19.315984   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:19.316024   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:17.512553   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.513046   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.513082   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:22.444948   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:24.945111   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.260362   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:23.260567   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.855554   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:21.868457   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:21.868527   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:21.902098   80228 cri.go:89] found id: ""
	I0814 17:39:21.902124   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.902132   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:21.902139   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:21.902200   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:21.934876   80228 cri.go:89] found id: ""
	I0814 17:39:21.934908   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.934919   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:21.934926   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:21.934987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:21.976507   80228 cri.go:89] found id: ""
	I0814 17:39:21.976536   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.976548   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:21.976555   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:21.976617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:22.013876   80228 cri.go:89] found id: ""
	I0814 17:39:22.013897   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.013904   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:22.013909   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:22.013955   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:22.051943   80228 cri.go:89] found id: ""
	I0814 17:39:22.051969   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.051979   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:22.051999   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:22.052064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:22.084702   80228 cri.go:89] found id: ""
	I0814 17:39:22.084725   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.084733   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:22.084738   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:22.084784   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:22.117397   80228 cri.go:89] found id: ""
	I0814 17:39:22.117424   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.117432   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:22.117439   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:22.117490   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:22.154139   80228 cri.go:89] found id: ""
	I0814 17:39:22.154168   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.154178   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:22.154189   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:22.154201   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:22.205550   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:22.205580   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:22.219644   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:22.219679   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:22.288934   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:22.288957   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:22.288969   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:22.372917   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:22.372954   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:24.912578   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:24.925365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:24.925430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:24.961207   80228 cri.go:89] found id: ""
	I0814 17:39:24.961234   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.961248   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:24.961257   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:24.961339   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:24.998878   80228 cri.go:89] found id: ""
	I0814 17:39:24.998904   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.998911   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:24.998918   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:24.998971   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:25.034141   80228 cri.go:89] found id: ""
	I0814 17:39:25.034174   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.034187   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:25.034196   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:25.034274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:25.075634   80228 cri.go:89] found id: ""
	I0814 17:39:25.075667   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.075679   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:25.075688   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:25.075759   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:25.112890   80228 cri.go:89] found id: ""
	I0814 17:39:25.112929   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.112939   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:25.112946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:25.113007   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:25.152887   80228 cri.go:89] found id: ""
	I0814 17:39:25.152913   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.152921   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:25.152927   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:25.152987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:25.186421   80228 cri.go:89] found id: ""
	I0814 17:39:25.186452   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.186463   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:25.186471   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:25.186537   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:25.220390   80228 cri.go:89] found id: ""
	I0814 17:39:25.220417   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.220425   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:25.220432   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:25.220446   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:25.296112   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:25.296146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:25.335421   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:25.335449   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:25.387690   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:25.387718   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:25.401926   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:25.401957   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:25.471111   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:24.012534   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:26.513529   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.445280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:29.445416   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:25.759098   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.759924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:30.259610   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.972237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:27.985512   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:27.985575   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:28.019454   80228 cri.go:89] found id: ""
	I0814 17:39:28.019482   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.019493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:28.019502   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:28.019566   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:28.056908   80228 cri.go:89] found id: ""
	I0814 17:39:28.056931   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.056939   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:28.056944   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:28.056998   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:28.090678   80228 cri.go:89] found id: ""
	I0814 17:39:28.090707   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.090715   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:28.090721   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:28.090785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:28.125557   80228 cri.go:89] found id: ""
	I0814 17:39:28.125591   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.125609   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:28.125620   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:28.125682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:28.158092   80228 cri.go:89] found id: ""
	I0814 17:39:28.158121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.158129   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:28.158135   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:28.158191   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:28.193403   80228 cri.go:89] found id: ""
	I0814 17:39:28.193434   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.193445   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:28.193454   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:28.193524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:28.231095   80228 cri.go:89] found id: ""
	I0814 17:39:28.231121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.231131   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:28.231139   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:28.231203   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:28.280157   80228 cri.go:89] found id: ""
	I0814 17:39:28.280185   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.280196   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:28.280207   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:28.280220   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:28.352877   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:28.352894   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:28.352906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:28.439692   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:28.439736   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:28.479986   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:28.480012   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:28.538454   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:28.538493   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.052941   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:31.065810   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:31.065879   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:31.097988   80228 cri.go:89] found id: ""
	I0814 17:39:31.098013   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.098020   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:31.098045   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:31.098102   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:31.130126   80228 cri.go:89] found id: ""
	I0814 17:39:31.130152   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.130160   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:31.130166   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:31.130225   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:31.165945   80228 cri.go:89] found id: ""
	I0814 17:39:31.165984   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.165995   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:31.166003   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:31.166064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:31.199749   80228 cri.go:89] found id: ""
	I0814 17:39:31.199772   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.199778   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:31.199784   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:31.199843   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:31.231398   80228 cri.go:89] found id: ""
	I0814 17:39:31.231425   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.231436   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:31.231444   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:31.231528   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:31.263842   80228 cri.go:89] found id: ""
	I0814 17:39:31.263868   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.263878   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:31.263885   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:31.263950   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:31.299258   80228 cri.go:89] found id: ""
	I0814 17:39:31.299289   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.299301   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:31.299309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:31.299399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:29.013468   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.013638   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.445769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:33.944939   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:32.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:34.262303   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.332626   80228 cri.go:89] found id: ""
	I0814 17:39:31.332649   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.332657   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:31.332666   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:31.332678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:31.369262   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:31.369288   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:31.426003   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:31.426034   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.439583   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:31.439611   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:31.507538   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:31.507563   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:31.507583   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.085342   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:34.097491   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:34.097567   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:34.129220   80228 cri.go:89] found id: ""
	I0814 17:39:34.129244   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.129254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:34.129262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:34.129322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:34.161233   80228 cri.go:89] found id: ""
	I0814 17:39:34.161256   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.161264   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:34.161270   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:34.161334   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:34.193649   80228 cri.go:89] found id: ""
	I0814 17:39:34.193675   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.193683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:34.193689   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:34.193754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:34.226722   80228 cri.go:89] found id: ""
	I0814 17:39:34.226753   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.226763   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:34.226772   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:34.226842   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:34.259735   80228 cri.go:89] found id: ""
	I0814 17:39:34.259761   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.259774   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:34.259787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:34.259850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:34.296804   80228 cri.go:89] found id: ""
	I0814 17:39:34.296830   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.296838   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:34.296844   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:34.296894   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:34.328942   80228 cri.go:89] found id: ""
	I0814 17:39:34.328973   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.328982   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:34.328988   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:34.329041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:34.360820   80228 cri.go:89] found id: ""
	I0814 17:39:34.360847   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.360858   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:34.360868   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:34.360882   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:34.411106   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:34.411142   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:34.424737   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:34.424769   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:34.489094   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:34.489122   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:34.489138   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.569783   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:34.569818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:33.015308   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.513073   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.945264   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:38.444913   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:36.760740   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:39.260499   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:37.107492   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:37.120829   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:37.120901   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:37.154556   80228 cri.go:89] found id: ""
	I0814 17:39:37.154589   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.154601   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:37.154609   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:37.154673   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:37.192570   80228 cri.go:89] found id: ""
	I0814 17:39:37.192602   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.192609   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:37.192615   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:37.192679   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:37.225845   80228 cri.go:89] found id: ""
	I0814 17:39:37.225891   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.225902   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:37.225917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:37.225986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:37.262370   80228 cri.go:89] found id: ""
	I0814 17:39:37.262399   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.262408   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:37.262416   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:37.262481   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:37.297642   80228 cri.go:89] found id: ""
	I0814 17:39:37.297669   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.297680   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:37.297687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:37.297754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:37.331006   80228 cri.go:89] found id: ""
	I0814 17:39:37.331032   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.331041   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:37.331046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:37.331111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:37.364753   80228 cri.go:89] found id: ""
	I0814 17:39:37.364777   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.364786   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:37.364792   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:37.364850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:37.397722   80228 cri.go:89] found id: ""
	I0814 17:39:37.397748   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.397760   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:37.397770   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:37.397785   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:37.473616   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:37.473643   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:37.473659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:37.557672   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:37.557710   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.596337   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:37.596368   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:37.646815   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:37.646853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.160391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:40.174099   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:40.174181   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:40.208783   80228 cri.go:89] found id: ""
	I0814 17:39:40.208814   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.208821   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:40.208829   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:40.208880   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:40.243555   80228 cri.go:89] found id: ""
	I0814 17:39:40.243580   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.243588   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:40.243594   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:40.243661   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:40.276685   80228 cri.go:89] found id: ""
	I0814 17:39:40.276711   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.276723   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:40.276731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:40.276795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:40.309893   80228 cri.go:89] found id: ""
	I0814 17:39:40.309925   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.309937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:40.309944   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:40.310073   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:40.341724   80228 cri.go:89] found id: ""
	I0814 17:39:40.341751   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.341762   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:40.341770   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:40.341834   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:40.376442   80228 cri.go:89] found id: ""
	I0814 17:39:40.376478   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.376487   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:40.376495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:40.376558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:40.419240   80228 cri.go:89] found id: ""
	I0814 17:39:40.419269   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.419277   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:40.419284   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:40.419374   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:40.464678   80228 cri.go:89] found id: ""
	I0814 17:39:40.464703   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.464712   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:40.464721   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:40.464737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:40.531138   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:40.531175   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.546809   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:40.546842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:40.618791   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:40.618809   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:40.618821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:40.706169   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:40.706219   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.513604   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.013349   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.445989   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:42.944417   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:41.261429   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.760436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.250987   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:43.266109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:43.266179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:43.301860   80228 cri.go:89] found id: ""
	I0814 17:39:43.301891   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.301899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:43.301908   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:43.301991   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:43.337166   80228 cri.go:89] found id: ""
	I0814 17:39:43.337195   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.337205   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:43.337212   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:43.337262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:43.370640   80228 cri.go:89] found id: ""
	I0814 17:39:43.370671   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.370683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:43.370696   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:43.370752   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:43.405598   80228 cri.go:89] found id: ""
	I0814 17:39:43.405624   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.405632   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:43.405638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:43.405705   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:43.437161   80228 cri.go:89] found id: ""
	I0814 17:39:43.437184   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.437192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:43.437198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:43.437295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:43.470675   80228 cri.go:89] found id: ""
	I0814 17:39:43.470707   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.470718   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:43.470726   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:43.470787   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:43.503036   80228 cri.go:89] found id: ""
	I0814 17:39:43.503062   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.503073   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:43.503081   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:43.503149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:43.538269   80228 cri.go:89] found id: ""
	I0814 17:39:43.538296   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.538304   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:43.538328   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:43.538340   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:43.621889   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:43.621936   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:43.667460   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:43.667491   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:43.723630   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:43.723663   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:43.738905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:43.738939   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:43.805484   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:46.306031   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:42.512438   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:44.513112   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.513203   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:45.445470   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:47.944790   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.260236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:48.260662   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.324624   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:46.324696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:46.360039   80228 cri.go:89] found id: ""
	I0814 17:39:46.360066   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.360074   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:46.360082   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:46.360131   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:46.413735   80228 cri.go:89] found id: ""
	I0814 17:39:46.413767   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.413779   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:46.413788   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:46.413876   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:46.458823   80228 cri.go:89] found id: ""
	I0814 17:39:46.458851   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.458861   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:46.458869   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:46.458928   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:46.495347   80228 cri.go:89] found id: ""
	I0814 17:39:46.495378   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.495387   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:46.495392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:46.495441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:46.531502   80228 cri.go:89] found id: ""
	I0814 17:39:46.531533   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.531545   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:46.531554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:46.531624   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:46.564450   80228 cri.go:89] found id: ""
	I0814 17:39:46.564473   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.564482   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:46.564488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:46.564535   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:46.598293   80228 cri.go:89] found id: ""
	I0814 17:39:46.598401   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.598421   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:46.598431   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:46.598498   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:46.632370   80228 cri.go:89] found id: ""
	I0814 17:39:46.632400   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.632411   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:46.632423   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:46.632438   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:46.711814   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:46.711848   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:46.749410   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:46.749443   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:46.801686   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:46.801720   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:46.815196   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:46.815218   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:46.885648   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.386223   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:49.399359   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:49.399430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:49.432133   80228 cri.go:89] found id: ""
	I0814 17:39:49.432168   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.432179   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:49.432186   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:49.432250   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:49.469760   80228 cri.go:89] found id: ""
	I0814 17:39:49.469790   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.469799   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:49.469811   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:49.469873   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:49.500437   80228 cri.go:89] found id: ""
	I0814 17:39:49.500466   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.500474   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:49.500481   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:49.500531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:49.533685   80228 cri.go:89] found id: ""
	I0814 17:39:49.533709   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.533717   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:49.533723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:49.533790   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:49.570551   80228 cri.go:89] found id: ""
	I0814 17:39:49.570577   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.570584   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:49.570590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:49.570654   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:49.606649   80228 cri.go:89] found id: ""
	I0814 17:39:49.606672   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.606680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:49.606686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:49.606734   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:49.638060   80228 cri.go:89] found id: ""
	I0814 17:39:49.638090   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.638101   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:49.638109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:49.638178   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:49.674503   80228 cri.go:89] found id: ""
	I0814 17:39:49.674526   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.674534   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:49.674543   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:49.674563   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:49.710185   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:49.710213   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:49.764112   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:49.764146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:49.777862   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:49.777888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:49.849786   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.849806   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:49.849819   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:48.513418   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:51.013242   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.444526   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.444788   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.944646   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.759890   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.760236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.760324   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.429811   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:52.444364   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:52.444441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:52.483047   80228 cri.go:89] found id: ""
	I0814 17:39:52.483074   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.483085   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:52.483093   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:52.483157   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:52.520236   80228 cri.go:89] found id: ""
	I0814 17:39:52.520264   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.520274   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:52.520287   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:52.520353   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:52.553757   80228 cri.go:89] found id: ""
	I0814 17:39:52.553784   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.553795   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:52.553802   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:52.553869   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:52.588782   80228 cri.go:89] found id: ""
	I0814 17:39:52.588808   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.588818   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:52.588827   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:52.588893   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:52.620144   80228 cri.go:89] found id: ""
	I0814 17:39:52.620180   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.620192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:52.620201   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:52.620274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:52.652712   80228 cri.go:89] found id: ""
	I0814 17:39:52.652743   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.652755   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:52.652763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:52.652825   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:52.687789   80228 cri.go:89] found id: ""
	I0814 17:39:52.687819   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.687831   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:52.687838   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:52.687892   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:52.718996   80228 cri.go:89] found id: ""
	I0814 17:39:52.719021   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.719031   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:52.719041   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:52.719055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:52.775775   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:52.775808   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:52.789024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:52.789055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:52.863320   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:52.863351   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:52.863366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:52.941533   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:52.941571   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:55.477833   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:55.490723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:55.490783   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:55.525816   80228 cri.go:89] found id: ""
	I0814 17:39:55.525844   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.525852   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:55.525859   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:55.525908   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:55.561855   80228 cri.go:89] found id: ""
	I0814 17:39:55.561878   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.561887   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:55.561892   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:55.561949   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:55.599997   80228 cri.go:89] found id: ""
	I0814 17:39:55.600027   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.600038   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:55.600046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:55.600112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:55.632869   80228 cri.go:89] found id: ""
	I0814 17:39:55.632902   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.632914   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:55.632922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:55.632990   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:55.666029   80228 cri.go:89] found id: ""
	I0814 17:39:55.666055   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.666066   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:55.666079   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:55.666136   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:55.697222   80228 cri.go:89] found id: ""
	I0814 17:39:55.697247   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.697254   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:55.697260   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:55.697308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:55.729517   80228 cri.go:89] found id: ""
	I0814 17:39:55.729549   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.729561   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:55.729576   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:55.729640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:55.763890   80228 cri.go:89] found id: ""
	I0814 17:39:55.763922   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.763934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:55.763944   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:55.763960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:55.819588   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:55.819624   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:55.833281   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:55.833314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:55.904610   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:55.904632   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:55.904644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:55.981035   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:55.981069   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:53.513407   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:55.513734   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:56.945649   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.444937   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:57.259832   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.760669   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:58.522870   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:58.536151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:58.536224   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:58.568827   80228 cri.go:89] found id: ""
	I0814 17:39:58.568857   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.568869   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:58.568877   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:58.568946   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:58.600523   80228 cri.go:89] found id: ""
	I0814 17:39:58.600554   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.600564   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:58.600571   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:58.600640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:58.634201   80228 cri.go:89] found id: ""
	I0814 17:39:58.634232   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.634240   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:58.634245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:58.634308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:58.668746   80228 cri.go:89] found id: ""
	I0814 17:39:58.668772   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.668781   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:58.668787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:58.668847   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:58.699695   80228 cri.go:89] found id: ""
	I0814 17:39:58.699727   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.699739   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:58.699752   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:58.699836   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:58.731047   80228 cri.go:89] found id: ""
	I0814 17:39:58.731081   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.731095   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:58.731103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:58.731168   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:58.773454   80228 cri.go:89] found id: ""
	I0814 17:39:58.773486   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.773495   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:58.773501   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:58.773561   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:58.810135   80228 cri.go:89] found id: ""
	I0814 17:39:58.810159   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.810166   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:58.810175   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:58.810191   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:58.844897   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:58.844925   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:58.901700   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:58.901745   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:58.914272   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:58.914296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:58.984593   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:58.984610   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:58.984622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:57.513854   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:00.013241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.945861   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.444575   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:02.262241   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.760164   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.563227   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:01.576764   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:01.576840   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:01.610842   80228 cri.go:89] found id: ""
	I0814 17:40:01.610871   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.610878   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:01.610884   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:01.610935   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:01.643774   80228 cri.go:89] found id: ""
	I0814 17:40:01.643806   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.643816   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:01.643824   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:01.643888   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:01.677867   80228 cri.go:89] found id: ""
	I0814 17:40:01.677892   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.677899   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:01.677906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:01.677967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:01.712394   80228 cri.go:89] found id: ""
	I0814 17:40:01.712420   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.712427   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:01.712433   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:01.712492   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:01.745637   80228 cri.go:89] found id: ""
	I0814 17:40:01.745666   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.745676   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:01.745683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:01.745745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:01.782364   80228 cri.go:89] found id: ""
	I0814 17:40:01.782394   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.782404   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:01.782411   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:01.782484   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:01.814569   80228 cri.go:89] found id: ""
	I0814 17:40:01.814596   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.814605   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:01.814614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:01.814674   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:01.850421   80228 cri.go:89] found id: ""
	I0814 17:40:01.850450   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.850459   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:01.850468   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:01.850482   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:01.862965   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:01.863001   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:01.931312   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:01.931357   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:01.931375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:02.008236   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:02.008278   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:02.043238   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:02.043267   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:04.596909   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:04.610091   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:04.610158   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:04.645169   80228 cri.go:89] found id: ""
	I0814 17:40:04.645195   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.645205   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:04.645213   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:04.645279   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:04.677708   80228 cri.go:89] found id: ""
	I0814 17:40:04.677740   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.677750   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:04.677761   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:04.677823   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:04.710319   80228 cri.go:89] found id: ""
	I0814 17:40:04.710351   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.710362   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:04.710374   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:04.710443   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:04.745166   80228 cri.go:89] found id: ""
	I0814 17:40:04.745202   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.745219   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:04.745226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:04.745287   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:04.777307   80228 cri.go:89] found id: ""
	I0814 17:40:04.777354   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.777376   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:04.777383   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:04.777447   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:04.813854   80228 cri.go:89] found id: ""
	I0814 17:40:04.813886   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.813901   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:04.813908   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:04.813972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:04.848014   80228 cri.go:89] found id: ""
	I0814 17:40:04.848041   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.848049   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:04.848055   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:04.848113   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:04.882689   80228 cri.go:89] found id: ""
	I0814 17:40:04.882719   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.882731   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:04.882742   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:04.882760   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:04.952074   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:04.952096   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:04.952112   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:05.030258   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:05.030300   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:05.066509   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:05.066542   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:05.120153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:05.120195   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:02.512935   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.513254   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.445637   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.945142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.760223   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.760857   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:07.634404   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:07.646900   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:07.646966   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:07.678654   80228 cri.go:89] found id: ""
	I0814 17:40:07.678680   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.678689   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:07.678696   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:07.678753   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:07.711355   80228 cri.go:89] found id: ""
	I0814 17:40:07.711381   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.711389   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:07.711395   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:07.711446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:07.744134   80228 cri.go:89] found id: ""
	I0814 17:40:07.744161   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.744169   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:07.744179   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:07.744242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:07.776981   80228 cri.go:89] found id: ""
	I0814 17:40:07.777008   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.777015   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:07.777022   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:07.777086   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:07.811626   80228 cri.go:89] found id: ""
	I0814 17:40:07.811651   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.811661   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:07.811667   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:07.811720   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:07.843218   80228 cri.go:89] found id: ""
	I0814 17:40:07.843251   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.843262   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:07.843270   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:07.843355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:07.875208   80228 cri.go:89] found id: ""
	I0814 17:40:07.875232   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.875239   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:07.875245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:07.875295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:07.907896   80228 cri.go:89] found id: ""
	I0814 17:40:07.907923   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.907934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:07.907945   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:07.907960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:07.959717   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:07.959753   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:07.973050   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:07.973081   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:08.035085   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:08.035107   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:08.035120   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:08.109722   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:08.109770   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:10.648203   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:10.661194   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:10.661280   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:10.698401   80228 cri.go:89] found id: ""
	I0814 17:40:10.698431   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.698442   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:10.698450   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:10.698515   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:10.730057   80228 cri.go:89] found id: ""
	I0814 17:40:10.730083   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.730094   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:10.730101   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:10.730163   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:10.768780   80228 cri.go:89] found id: ""
	I0814 17:40:10.768807   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.768817   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:10.768824   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:10.768885   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:10.800866   80228 cri.go:89] found id: ""
	I0814 17:40:10.800898   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.800907   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:10.800917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:10.800984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:10.833741   80228 cri.go:89] found id: ""
	I0814 17:40:10.833771   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.833782   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:10.833789   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:10.833850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:10.865670   80228 cri.go:89] found id: ""
	I0814 17:40:10.865699   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.865706   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:10.865717   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:10.865770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:10.904726   80228 cri.go:89] found id: ""
	I0814 17:40:10.904757   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.904765   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:10.904771   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:10.904821   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:10.940549   80228 cri.go:89] found id: ""
	I0814 17:40:10.940578   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.940588   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:10.940598   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:10.940620   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:10.992592   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:10.992622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:11.006388   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:11.006412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:11.075455   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:11.075473   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:11.075486   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:11.156622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:11.156658   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:07.012878   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:09.013908   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.512592   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.444764   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.944931   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.259959   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.760823   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.695055   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:13.709460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:13.709531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:13.741941   80228 cri.go:89] found id: ""
	I0814 17:40:13.741967   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.741975   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:13.741981   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:13.742042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:13.773916   80228 cri.go:89] found id: ""
	I0814 17:40:13.773940   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.773947   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:13.773952   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:13.773999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:13.807871   80228 cri.go:89] found id: ""
	I0814 17:40:13.807902   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.807912   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:13.807918   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:13.807981   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:13.840902   80228 cri.go:89] found id: ""
	I0814 17:40:13.840931   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.840943   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:13.840952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:13.841018   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:13.871969   80228 cri.go:89] found id: ""
	I0814 17:40:13.871998   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.872010   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:13.872019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:13.872090   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:13.905502   80228 cri.go:89] found id: ""
	I0814 17:40:13.905524   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.905531   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:13.905537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:13.905599   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:13.937356   80228 cri.go:89] found id: ""
	I0814 17:40:13.937386   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.937396   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:13.937404   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:13.937466   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:13.972383   80228 cri.go:89] found id: ""
	I0814 17:40:13.972410   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.972418   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:13.972427   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:13.972448   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:14.022691   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:14.022717   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:14.035543   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:14.035567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:14.104869   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:14.104889   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:14.104905   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:14.182185   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:14.182221   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:13.513519   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.012958   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:15.945499   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.445122   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.259488   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.259706   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.259972   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.720519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:16.734323   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:16.734406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:16.769454   80228 cri.go:89] found id: ""
	I0814 17:40:16.769483   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.769493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:16.769501   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:16.769565   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:16.801513   80228 cri.go:89] found id: ""
	I0814 17:40:16.801541   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.801548   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:16.801554   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:16.801610   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:16.835184   80228 cri.go:89] found id: ""
	I0814 17:40:16.835212   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.835220   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:16.835226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:16.835275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:16.867162   80228 cri.go:89] found id: ""
	I0814 17:40:16.867192   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.867201   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:16.867207   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:16.867257   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:16.902912   80228 cri.go:89] found id: ""
	I0814 17:40:16.902942   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.902953   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:16.902961   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:16.903026   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:16.935004   80228 cri.go:89] found id: ""
	I0814 17:40:16.935033   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.935044   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:16.935052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:16.935115   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:16.969082   80228 cri.go:89] found id: ""
	I0814 17:40:16.969110   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.969120   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:16.969127   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:16.969194   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:17.002594   80228 cri.go:89] found id: ""
	I0814 17:40:17.002622   80228 logs.go:276] 0 containers: []
	W0814 17:40:17.002633   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:17.002644   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:17.002659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:17.054319   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:17.054359   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:17.068024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:17.068048   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:17.139480   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:17.139499   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:17.139514   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:17.222086   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:17.222140   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:19.758630   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:19.772186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:19.772254   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:19.807719   80228 cri.go:89] found id: ""
	I0814 17:40:19.807751   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.807760   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:19.807766   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:19.807830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:19.851023   80228 cri.go:89] found id: ""
	I0814 17:40:19.851054   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.851067   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:19.851083   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:19.851154   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:19.882961   80228 cri.go:89] found id: ""
	I0814 17:40:19.882987   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.882997   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:19.883005   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:19.883063   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:19.920312   80228 cri.go:89] found id: ""
	I0814 17:40:19.920345   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.920356   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:19.920365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:19.920430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:19.953628   80228 cri.go:89] found id: ""
	I0814 17:40:19.953658   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.953671   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:19.953683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:19.953741   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:19.984998   80228 cri.go:89] found id: ""
	I0814 17:40:19.985028   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.985036   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:19.985043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:19.985092   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:20.018728   80228 cri.go:89] found id: ""
	I0814 17:40:20.018753   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.018761   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:20.018766   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:20.018814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:20.050718   80228 cri.go:89] found id: ""
	I0814 17:40:20.050743   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.050757   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:20.050765   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:20.050777   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:20.101567   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:20.101602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:20.114890   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:20.114920   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:20.183926   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:20.183948   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:20.183960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:20.270195   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:20.270223   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:18.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.513633   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.445352   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.945704   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.260365   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:24.760475   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.807078   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:22.820187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:22.820260   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:22.852474   80228 cri.go:89] found id: ""
	I0814 17:40:22.852504   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.852514   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:22.852522   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:22.852596   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:22.887141   80228 cri.go:89] found id: ""
	I0814 17:40:22.887167   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.887177   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:22.887184   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:22.887248   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:22.919384   80228 cri.go:89] found id: ""
	I0814 17:40:22.919417   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.919428   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:22.919436   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:22.919502   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:22.951877   80228 cri.go:89] found id: ""
	I0814 17:40:22.951897   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.951905   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:22.951910   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:22.951965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:22.987712   80228 cri.go:89] found id: ""
	I0814 17:40:22.987742   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.987752   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:22.987760   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:22.987832   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:23.025562   80228 cri.go:89] found id: ""
	I0814 17:40:23.025597   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.025608   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:23.025616   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:23.025680   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:23.058928   80228 cri.go:89] found id: ""
	I0814 17:40:23.058955   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.058962   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:23.058969   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:23.059025   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:23.096807   80228 cri.go:89] found id: ""
	I0814 17:40:23.096836   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.096847   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:23.096858   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:23.096874   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:23.148943   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:23.148977   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:23.161905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:23.161927   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:23.232119   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:23.232147   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:23.232160   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:23.320693   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:23.320731   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:25.858506   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:25.871891   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:25.871964   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:25.904732   80228 cri.go:89] found id: ""
	I0814 17:40:25.904760   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.904769   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:25.904775   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:25.904830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:25.936317   80228 cri.go:89] found id: ""
	I0814 17:40:25.936347   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.936358   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:25.936365   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:25.936427   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:25.969921   80228 cri.go:89] found id: ""
	I0814 17:40:25.969946   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.969954   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:25.969960   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:25.970009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:26.022832   80228 cri.go:89] found id: ""
	I0814 17:40:26.022862   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.022872   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:26.022880   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:26.022941   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:26.056178   80228 cri.go:89] found id: ""
	I0814 17:40:26.056206   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.056214   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:26.056224   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:26.056275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:26.086921   80228 cri.go:89] found id: ""
	I0814 17:40:26.086955   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.086966   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:26.086974   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:26.087031   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:26.120631   80228 cri.go:89] found id: ""
	I0814 17:40:26.120665   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.120677   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:26.120686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:26.120745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:26.154258   80228 cri.go:89] found id: ""
	I0814 17:40:26.154289   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.154300   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:26.154310   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:26.154324   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:26.208366   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:26.208405   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:26.222160   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:26.222192   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:26.294737   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:26.294756   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:26.294768   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:22.513813   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.013707   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.444691   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.944277   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.945043   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.260184   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.262080   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:26.372870   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:26.372906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:28.908165   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:28.920754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:28.920816   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:28.953950   80228 cri.go:89] found id: ""
	I0814 17:40:28.953971   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.953978   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:28.953987   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:28.954035   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:28.985228   80228 cri.go:89] found id: ""
	I0814 17:40:28.985266   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.985278   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:28.985286   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:28.985347   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:29.016295   80228 cri.go:89] found id: ""
	I0814 17:40:29.016328   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.016336   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:29.016341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:29.016392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:29.048664   80228 cri.go:89] found id: ""
	I0814 17:40:29.048696   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.048707   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:29.048715   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:29.048778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:29.080441   80228 cri.go:89] found id: ""
	I0814 17:40:29.080466   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.080474   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:29.080520   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:29.080584   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:29.112377   80228 cri.go:89] found id: ""
	I0814 17:40:29.112407   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.112418   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:29.112426   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:29.112493   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:29.145368   80228 cri.go:89] found id: ""
	I0814 17:40:29.145395   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.145403   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:29.145409   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:29.145471   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:29.177305   80228 cri.go:89] found id: ""
	I0814 17:40:29.177333   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.177341   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:29.177350   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:29.177366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:29.232156   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:29.232197   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:29.245286   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:29.245317   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:29.322257   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:29.322286   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:29.322302   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:29.397679   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:29.397714   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:27.512862   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.514756   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.945087   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.444743   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.760242   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.259825   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.935264   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:31.948380   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:31.948446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:31.978898   80228 cri.go:89] found id: ""
	I0814 17:40:31.978925   80228 logs.go:276] 0 containers: []
	W0814 17:40:31.978932   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:31.978939   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:31.978989   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:32.010652   80228 cri.go:89] found id: ""
	I0814 17:40:32.010681   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.010692   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:32.010699   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:32.010767   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:32.044821   80228 cri.go:89] found id: ""
	I0814 17:40:32.044852   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.044860   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:32.044866   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:32.044915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:32.076359   80228 cri.go:89] found id: ""
	I0814 17:40:32.076388   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.076398   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:32.076406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:32.076469   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:32.107652   80228 cri.go:89] found id: ""
	I0814 17:40:32.107680   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.107692   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:32.107709   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:32.107770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:32.138445   80228 cri.go:89] found id: ""
	I0814 17:40:32.138473   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.138484   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:32.138492   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:32.138558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:32.173771   80228 cri.go:89] found id: ""
	I0814 17:40:32.173794   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.173802   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:32.173807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:32.173857   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:32.206387   80228 cri.go:89] found id: ""
	I0814 17:40:32.206418   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.206429   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:32.206441   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:32.206454   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.258114   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:32.258148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:32.271984   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:32.272009   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:32.335423   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:32.335447   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:32.335464   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:32.411155   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:32.411206   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:34.975280   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:34.988098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:34.988176   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:35.022020   80228 cri.go:89] found id: ""
	I0814 17:40:35.022047   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.022062   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:35.022071   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:35.022124   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:35.055528   80228 cri.go:89] found id: ""
	I0814 17:40:35.055568   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.055578   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:35.055586   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:35.055647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:35.088373   80228 cri.go:89] found id: ""
	I0814 17:40:35.088404   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.088415   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:35.088422   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:35.088489   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:35.123162   80228 cri.go:89] found id: ""
	I0814 17:40:35.123188   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.123198   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:35.123206   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:35.123268   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:35.160240   80228 cri.go:89] found id: ""
	I0814 17:40:35.160267   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.160277   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:35.160286   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:35.160348   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:35.196249   80228 cri.go:89] found id: ""
	I0814 17:40:35.196276   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.196285   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:35.196293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:35.196359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:35.232564   80228 cri.go:89] found id: ""
	I0814 17:40:35.232588   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.232598   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:35.232606   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:35.232671   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:35.267357   80228 cri.go:89] found id: ""
	I0814 17:40:35.267383   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.267392   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:35.267399   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:35.267412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:35.279779   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:35.279806   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:35.347748   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:35.347769   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:35.347782   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:35.427900   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:35.427932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:35.468925   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:35.468953   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.013942   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.513138   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.944749   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.444665   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.760292   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:38.020581   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:38.034985   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:38.035066   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:38.070206   80228 cri.go:89] found id: ""
	I0814 17:40:38.070231   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.070240   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:38.070246   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:38.070294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:38.103859   80228 cri.go:89] found id: ""
	I0814 17:40:38.103885   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.103893   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:38.103898   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:38.103947   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:38.138247   80228 cri.go:89] found id: ""
	I0814 17:40:38.138271   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.138278   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:38.138285   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:38.138345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:38.179475   80228 cri.go:89] found id: ""
	I0814 17:40:38.179511   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.179520   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:38.179526   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:38.179578   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:38.224892   80228 cri.go:89] found id: ""
	I0814 17:40:38.224922   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.224932   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:38.224940   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:38.224996   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:38.270456   80228 cri.go:89] found id: ""
	I0814 17:40:38.270485   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.270497   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:38.270504   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:38.270569   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:38.305267   80228 cri.go:89] found id: ""
	I0814 17:40:38.305300   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.305308   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:38.305315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:38.305387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:38.336942   80228 cri.go:89] found id: ""
	I0814 17:40:38.336978   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.336989   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:38.337000   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:38.337016   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:38.388618   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:38.388651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:38.403442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:38.403472   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:38.478225   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:38.478256   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:38.478273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:38.553400   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:38.553440   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:41.089947   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:41.101989   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:41.102070   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:41.133743   80228 cri.go:89] found id: ""
	I0814 17:40:41.133767   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.133774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:41.133780   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:41.133828   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:41.169671   80228 cri.go:89] found id: ""
	I0814 17:40:41.169706   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.169714   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:41.169721   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:41.169773   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:41.203425   80228 cri.go:89] found id: ""
	I0814 17:40:41.203451   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.203459   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:41.203475   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:41.203534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:41.237031   80228 cri.go:89] found id: ""
	I0814 17:40:41.237064   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.237075   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:41.237084   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:41.237149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:41.271095   80228 cri.go:89] found id: ""
	I0814 17:40:41.271120   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.271128   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:41.271134   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:41.271190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:41.303640   80228 cri.go:89] found id: ""
	I0814 17:40:41.303672   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.303684   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:41.303692   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:41.303755   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:37.013555   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.013733   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.013910   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.943472   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.944582   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.261795   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.759672   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.336010   80228 cri.go:89] found id: ""
	I0814 17:40:41.336047   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.336062   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:41.336071   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:41.336140   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:41.370098   80228 cri.go:89] found id: ""
	I0814 17:40:41.370133   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.370143   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:41.370154   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:41.370168   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:41.420760   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:41.420794   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:41.433651   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:41.433678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:41.506623   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:41.506644   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:41.506657   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:41.591390   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:41.591426   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:44.130649   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:44.144362   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:44.144428   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:44.178485   80228 cri.go:89] found id: ""
	I0814 17:40:44.178516   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.178527   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:44.178535   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:44.178600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:44.214231   80228 cri.go:89] found id: ""
	I0814 17:40:44.214260   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.214268   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:44.214274   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:44.214336   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:44.248483   80228 cri.go:89] found id: ""
	I0814 17:40:44.248513   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.248524   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:44.248531   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:44.248600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:44.282445   80228 cri.go:89] found id: ""
	I0814 17:40:44.282472   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.282481   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:44.282493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:44.282560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:44.315141   80228 cri.go:89] found id: ""
	I0814 17:40:44.315169   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.315190   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:44.315198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:44.315259   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:44.346756   80228 cri.go:89] found id: ""
	I0814 17:40:44.346781   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.346789   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:44.346795   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:44.346853   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:44.378143   80228 cri.go:89] found id: ""
	I0814 17:40:44.378172   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.378183   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:44.378191   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:44.378255   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:44.411526   80228 cri.go:89] found id: ""
	I0814 17:40:44.411557   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.411567   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:44.411578   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:44.411592   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:44.459873   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:44.459913   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:44.473112   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:44.473148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:44.547514   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:44.547546   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:44.547579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:44.630377   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:44.630415   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:43.512113   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.512590   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.945080   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.946506   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.760626   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:48.260015   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.260186   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.173094   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:47.185854   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:47.185927   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:47.228755   80228 cri.go:89] found id: ""
	I0814 17:40:47.228781   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.228788   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:47.228795   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:47.228851   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:47.264986   80228 cri.go:89] found id: ""
	I0814 17:40:47.265020   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.265031   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:47.265037   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:47.265100   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:47.296900   80228 cri.go:89] found id: ""
	I0814 17:40:47.296929   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.296940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:47.296947   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:47.297009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:47.328120   80228 cri.go:89] found id: ""
	I0814 17:40:47.328147   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.328155   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:47.328161   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:47.328210   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:47.364147   80228 cri.go:89] found id: ""
	I0814 17:40:47.364171   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.364178   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:47.364184   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:47.364238   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:47.400466   80228 cri.go:89] found id: ""
	I0814 17:40:47.400493   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.400501   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:47.400507   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:47.400562   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:47.432681   80228 cri.go:89] found id: ""
	I0814 17:40:47.432713   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.432724   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:47.432732   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:47.432801   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:47.465466   80228 cri.go:89] found id: ""
	I0814 17:40:47.465498   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.465510   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:47.465522   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:47.465536   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:47.502076   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:47.502114   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:47.554451   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:47.554488   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:47.567658   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:47.567690   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:47.635805   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:47.635829   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:47.635844   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:50.215353   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:50.227723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:50.227795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:50.258250   80228 cri.go:89] found id: ""
	I0814 17:40:50.258276   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.258287   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:50.258296   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:50.258363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:50.291371   80228 cri.go:89] found id: ""
	I0814 17:40:50.291406   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.291416   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:50.291423   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:50.291479   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:50.321449   80228 cri.go:89] found id: ""
	I0814 17:40:50.321473   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.321481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:50.321486   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:50.321545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:50.351752   80228 cri.go:89] found id: ""
	I0814 17:40:50.351780   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.351791   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:50.351799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:50.351856   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:50.382022   80228 cri.go:89] found id: ""
	I0814 17:40:50.382050   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.382057   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:50.382063   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:50.382118   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:50.414057   80228 cri.go:89] found id: ""
	I0814 17:40:50.414083   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.414091   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:50.414098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:50.414156   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:50.447508   80228 cri.go:89] found id: ""
	I0814 17:40:50.447530   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.447537   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:50.447543   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:50.447606   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:50.487401   80228 cri.go:89] found id: ""
	I0814 17:40:50.487425   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.487434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:50.487442   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:50.487455   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:50.524404   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:50.524439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:50.578220   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:50.578256   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:50.591405   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:50.591431   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:50.657727   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:50.657750   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:50.657762   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:47.514490   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.012588   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.445363   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.944903   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.760728   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.760918   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:53.237985   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:53.250502   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:53.250572   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:53.285728   80228 cri.go:89] found id: ""
	I0814 17:40:53.285763   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.285774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:53.285784   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:53.285848   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:53.318195   80228 cri.go:89] found id: ""
	I0814 17:40:53.318231   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.318243   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:53.318252   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:53.318317   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:53.350259   80228 cri.go:89] found id: ""
	I0814 17:40:53.350291   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.350302   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:53.350310   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:53.350385   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:53.385894   80228 cri.go:89] found id: ""
	I0814 17:40:53.385920   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.385928   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:53.385934   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:53.385983   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:53.420851   80228 cri.go:89] found id: ""
	I0814 17:40:53.420878   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.420890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:53.420897   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:53.420963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:53.458332   80228 cri.go:89] found id: ""
	I0814 17:40:53.458370   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.458381   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:53.458392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:53.458465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:53.489719   80228 cri.go:89] found id: ""
	I0814 17:40:53.489750   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.489759   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:53.489765   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:53.489820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:53.522942   80228 cri.go:89] found id: ""
	I0814 17:40:53.522977   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.522988   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:53.522998   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:53.523013   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:53.599450   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:53.599492   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:53.637225   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:53.637254   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:53.688605   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:53.688647   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:53.704601   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:53.704633   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:53.775046   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.275201   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:56.288406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:56.288463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:52.013747   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.513735   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.514335   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:55.445462   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.447142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.946025   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.261047   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.760136   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.322862   80228 cri.go:89] found id: ""
	I0814 17:40:56.322891   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.322899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:56.322905   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:56.322954   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:56.356214   80228 cri.go:89] found id: ""
	I0814 17:40:56.356243   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.356262   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:56.356268   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:56.356338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:56.388877   80228 cri.go:89] found id: ""
	I0814 17:40:56.388900   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.388909   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:56.388915   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:56.388967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:56.422552   80228 cri.go:89] found id: ""
	I0814 17:40:56.422577   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.422585   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:56.422590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:56.422649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:56.456995   80228 cri.go:89] found id: ""
	I0814 17:40:56.457018   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.457026   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:56.457031   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:56.457079   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:56.495745   80228 cri.go:89] found id: ""
	I0814 17:40:56.495772   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.495788   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:56.495797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:56.495868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:56.529139   80228 cri.go:89] found id: ""
	I0814 17:40:56.529171   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.529179   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:56.529185   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:56.529237   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:56.561377   80228 cri.go:89] found id: ""
	I0814 17:40:56.561406   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.561414   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:56.561424   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:56.561439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:56.601504   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:56.601537   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:56.653369   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:56.653403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:56.666117   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:56.666144   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:56.731921   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.731949   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:56.731963   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.315712   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:59.328425   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:59.328486   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:59.364056   80228 cri.go:89] found id: ""
	I0814 17:40:59.364080   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.364088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:59.364094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:59.364151   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:59.398948   80228 cri.go:89] found id: ""
	I0814 17:40:59.398971   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.398978   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:59.398984   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:59.399029   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:59.430301   80228 cri.go:89] found id: ""
	I0814 17:40:59.430327   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.430335   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:59.430341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:59.430406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:59.465278   80228 cri.go:89] found id: ""
	I0814 17:40:59.465301   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.465309   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:59.465315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:59.465372   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:59.497544   80228 cri.go:89] found id: ""
	I0814 17:40:59.497575   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.497586   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:59.497595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:59.497659   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:59.529463   80228 cri.go:89] found id: ""
	I0814 17:40:59.529494   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.529506   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:59.529513   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:59.529587   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:59.562448   80228 cri.go:89] found id: ""
	I0814 17:40:59.562477   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.562487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:59.562495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:59.562609   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:59.594059   80228 cri.go:89] found id: ""
	I0814 17:40:59.594089   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.594103   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:59.594112   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:59.594123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.672139   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:59.672172   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:59.710714   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:59.710743   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:59.762645   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:59.762676   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:59.776006   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:59.776033   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:59.838187   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:59.013030   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:01.013280   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.445595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.944484   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.260244   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.760862   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.338964   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:02.351381   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:02.351460   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:02.383206   80228 cri.go:89] found id: ""
	I0814 17:41:02.383235   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.383244   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:02.383250   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:02.383310   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:02.417016   80228 cri.go:89] found id: ""
	I0814 17:41:02.417042   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.417049   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:02.417055   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:02.417111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:02.451936   80228 cri.go:89] found id: ""
	I0814 17:41:02.451964   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.451974   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:02.451982   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:02.452042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:02.489896   80228 cri.go:89] found id: ""
	I0814 17:41:02.489927   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.489937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:02.489945   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:02.490011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:02.524273   80228 cri.go:89] found id: ""
	I0814 17:41:02.524308   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.524339   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:02.524346   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:02.524409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:02.558813   80228 cri.go:89] found id: ""
	I0814 17:41:02.558842   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.558850   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:02.558861   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:02.558917   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:02.592704   80228 cri.go:89] found id: ""
	I0814 17:41:02.592733   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.592747   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:02.592753   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:02.592818   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:02.625250   80228 cri.go:89] found id: ""
	I0814 17:41:02.625277   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.625288   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:02.625299   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:02.625312   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:02.677577   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:02.677613   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:02.691407   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:02.691439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:02.756797   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:02.756869   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:02.756888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:02.830803   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:02.830842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.370085   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:05.385272   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:05.385342   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:05.421775   80228 cri.go:89] found id: ""
	I0814 17:41:05.421799   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.421806   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:05.421812   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:05.421860   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:05.457054   80228 cri.go:89] found id: ""
	I0814 17:41:05.457083   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.457093   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:05.457100   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:05.457153   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:05.489290   80228 cri.go:89] found id: ""
	I0814 17:41:05.489330   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.489338   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:05.489345   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:05.489392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:05.527066   80228 cri.go:89] found id: ""
	I0814 17:41:05.527091   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.527098   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:05.527105   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:05.527155   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:05.563882   80228 cri.go:89] found id: ""
	I0814 17:41:05.563915   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.563925   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:05.563931   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:05.563982   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:05.601837   80228 cri.go:89] found id: ""
	I0814 17:41:05.601863   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.601871   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:05.601879   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:05.601940   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:05.633503   80228 cri.go:89] found id: ""
	I0814 17:41:05.633531   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.633539   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:05.633545   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:05.633615   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:05.668281   80228 cri.go:89] found id: ""
	I0814 17:41:05.668312   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.668324   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:05.668335   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:05.668349   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:05.747214   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:05.747249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.784408   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:05.784441   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:05.835067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:05.835103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:05.847938   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:05.847966   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:05.917404   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
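The container-listing step that repeats throughout the log above boils down to running crictl with a name filter and collecting the returned IDs ("0 containers" means the filter matched nothing). A minimal local sketch of that pattern in Go, assuming crictl is installed and sudo can reach the CRI socket; this is illustrative only, not minikube's actual cri.go/ssh_runner code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose name matches the given filter, using the same shape of command
// the log shows: `crictl ps -a --quiet --name=<name>`.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		// An empty result corresponds to the "0 containers: []" lines above.
		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	}
}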
	I0814 17:41:03.513033   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:05.514476   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:06.944595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.944850   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:07.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:09.762513   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.417559   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:08.431092   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:08.431165   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:08.465357   80228 cri.go:89] found id: ""
	I0814 17:41:08.465515   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.465543   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:08.465560   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:08.465675   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:08.499085   80228 cri.go:89] found id: ""
	I0814 17:41:08.499114   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.499123   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:08.499129   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:08.499180   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:08.533881   80228 cri.go:89] found id: ""
	I0814 17:41:08.533909   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.533917   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:08.533922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:08.533972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:08.570503   80228 cri.go:89] found id: ""
	I0814 17:41:08.570549   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.570560   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:08.570572   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:08.570649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:08.602557   80228 cri.go:89] found id: ""
	I0814 17:41:08.602599   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.602610   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:08.602691   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:08.602785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:08.636174   80228 cri.go:89] found id: ""
	I0814 17:41:08.636199   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.636206   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:08.636213   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:08.636261   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:08.672774   80228 cri.go:89] found id: ""
	I0814 17:41:08.672804   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.672815   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:08.672823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:08.672890   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:08.705535   80228 cri.go:89] found id: ""
	I0814 17:41:08.705590   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.705605   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:08.705622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:08.705641   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:08.744315   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:08.744341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:08.794632   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:08.794666   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:08.808089   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:08.808117   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:08.876417   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:08.876436   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:08.876452   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:08.013688   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:10.512639   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.444206   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:13.944056   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:12.260065   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:14.759640   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.458562   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:11.470905   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:11.470965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:11.505992   80228 cri.go:89] found id: ""
	I0814 17:41:11.506023   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.506036   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:11.506044   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:11.506112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:11.540893   80228 cri.go:89] found id: ""
	I0814 17:41:11.540922   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.540932   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:11.540945   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:11.541001   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:11.575423   80228 cri.go:89] found id: ""
	I0814 17:41:11.575448   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.575455   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:11.575462   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:11.575520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:11.608126   80228 cri.go:89] found id: ""
	I0814 17:41:11.608155   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.608164   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:11.608171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:11.608222   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:11.640165   80228 cri.go:89] found id: ""
	I0814 17:41:11.640190   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.640198   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:11.640204   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:11.640263   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:11.674425   80228 cri.go:89] found id: ""
	I0814 17:41:11.674446   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.674455   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:11.674460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:11.674513   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:11.707448   80228 cri.go:89] found id: ""
	I0814 17:41:11.707477   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.707487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:11.707493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:11.707555   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:11.744309   80228 cri.go:89] found id: ""
	I0814 17:41:11.744338   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.744346   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:11.744363   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:11.744375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:11.824165   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:11.824196   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:11.862013   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:11.862039   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:11.913862   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:11.913902   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:11.927147   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:11.927178   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:11.998403   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
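Every "failed describe nodes" stanza above ends the same way: kubectl cannot reach localhost:8443 because no apiserver is listening. A quick way to confirm that from the node is a plain TCP dial; this is a hypothetical helper sketch, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The kubeconfig on the node points at localhost:8443; when nothing is
	// listening there, kubectl reports "connection refused" as seen above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}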
	I0814 17:41:14.498590   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:14.512847   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:14.512938   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:14.549255   80228 cri.go:89] found id: ""
	I0814 17:41:14.549288   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.549306   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:14.549316   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:14.549382   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:14.588917   80228 cri.go:89] found id: ""
	I0814 17:41:14.588948   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.588956   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:14.588963   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:14.589012   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:14.622581   80228 cri.go:89] found id: ""
	I0814 17:41:14.622611   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.622621   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:14.622628   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:14.622693   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:14.656029   80228 cri.go:89] found id: ""
	I0814 17:41:14.656056   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.656064   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:14.656070   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:14.656117   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:14.687502   80228 cri.go:89] found id: ""
	I0814 17:41:14.687527   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.687536   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:14.687541   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:14.687614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:14.720682   80228 cri.go:89] found id: ""
	I0814 17:41:14.720713   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.720721   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:14.720728   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:14.720778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:14.752482   80228 cri.go:89] found id: ""
	I0814 17:41:14.752511   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.752520   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:14.752525   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:14.752577   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:14.792980   80228 cri.go:89] found id: ""
	I0814 17:41:14.793004   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.793014   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:14.793026   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:14.793042   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:14.845259   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:14.845297   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:14.858530   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:14.858556   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:14.931025   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.931054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:14.931067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:15.008081   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:15.008115   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:13.014174   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:15.512768   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444802   79521 pod_ready.go:81] duration metric: took 4m0.006448573s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:16.444810   79521 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 17:41:16.444817   79521 pod_ready.go:38] duration metric: took 4m5.044051569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:16.444832   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:41:16.444858   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:16.444901   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:16.499710   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.499742   79521 cri.go:89] found id: ""
	I0814 17:41:16.499751   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:16.499815   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.504467   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:16.504544   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:16.546815   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.546842   79521 cri.go:89] found id: ""
	I0814 17:41:16.546851   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:16.546905   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.550917   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:16.550986   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:16.590195   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:16.590216   79521 cri.go:89] found id: ""
	I0814 17:41:16.590224   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:16.590267   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.594123   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:16.594196   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:16.631058   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:16.631091   79521 cri.go:89] found id: ""
	I0814 17:41:16.631101   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:16.631163   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.635151   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:16.635226   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:16.671555   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.671582   79521 cri.go:89] found id: ""
	I0814 17:41:16.671592   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:16.671657   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.675790   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:16.675847   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:16.713131   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:16.713157   79521 cri.go:89] found id: ""
	I0814 17:41:16.713165   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:16.713217   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.717296   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:16.717354   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:16.756212   79521 cri.go:89] found id: ""
	I0814 17:41:16.756245   79521 logs.go:276] 0 containers: []
	W0814 17:41:16.756255   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:16.756261   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:16.756324   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:16.802379   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.802411   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.802417   79521 cri.go:89] found id: ""
	I0814 17:41:16.802431   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:16.802492   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.807105   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.811210   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:16.811241   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.852490   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:16.852526   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.894384   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:16.894425   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.929919   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:16.929949   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.965031   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:16.965061   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:17.468878   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.468945   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.482799   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.482826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:17.610874   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:17.610904   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:17.649292   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:17.649322   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:17.691014   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:17.691045   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:17.749218   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.749254   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.794240   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.794280   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.868805   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:17.868851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:18.760369   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
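The pod_ready lines above are a polling loop that re-checks the Ready condition of a metrics-server pod every couple of seconds until a roughly four-minute deadline expires ("context deadline exceeded" earlier in the log). A minimal sketch of the same idea using kubectl and a context deadline; the pod name comes from the log, while the kube context name ("my-profile") and the helper itself are assumptions, not minikube's pod_ready implementation:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl for the pod's Ready condition and returns true
// when its status is "True".
func podReady(ctx context.Context, kubeContext, namespace, pod string) (bool, error) {
	out, err := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Roughly the 4m budget the log reports before giving up.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		// "my-profile" is a placeholder context name.
		ready, err := podReady(ctx, "my-profile", "kube-system", "metrics-server-6867b74b74-qtzm8")
		if ready {
			fmt.Println("pod is Ready")
			return
		}
		if ctx.Err() != nil {
			fmt.Println("gave up waiting:", ctx.Err(), "last error:", err)
			return
		}
		time.Sleep(2 * time.Second)
	}
}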
	I0814 17:41:17.544873   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:17.557699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:17.557791   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:17.600314   80228 cri.go:89] found id: ""
	I0814 17:41:17.600347   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.600360   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:17.600370   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:17.600441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:17.634873   80228 cri.go:89] found id: ""
	I0814 17:41:17.634902   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.634914   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:17.634923   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:17.634986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:17.670521   80228 cri.go:89] found id: ""
	I0814 17:41:17.670552   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.670563   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:17.670571   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:17.670647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:17.705587   80228 cri.go:89] found id: ""
	I0814 17:41:17.705612   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.705626   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:17.705632   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:17.705682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:17.768178   80228 cri.go:89] found id: ""
	I0814 17:41:17.768207   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.768218   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:17.768226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:17.768290   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:17.804692   80228 cri.go:89] found id: ""
	I0814 17:41:17.804721   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.804729   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:17.804735   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:17.804795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:17.847994   80228 cri.go:89] found id: ""
	I0814 17:41:17.848030   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.848041   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:17.848052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:17.848122   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:17.883905   80228 cri.go:89] found id: ""
	I0814 17:41:17.883935   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.883944   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:17.883953   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.883965   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.931481   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.931522   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.983315   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.983363   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.996941   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.996981   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:18.067254   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:18.067279   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:18.067295   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:20.642099   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.655941   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.656014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.692525   80228 cri.go:89] found id: ""
	I0814 17:41:20.692554   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.692565   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:20.692577   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.692634   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.727721   80228 cri.go:89] found id: ""
	I0814 17:41:20.727755   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.727769   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:20.727778   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.727845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.770441   80228 cri.go:89] found id: ""
	I0814 17:41:20.770471   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.770481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:20.770488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.770550   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.807932   80228 cri.go:89] found id: ""
	I0814 17:41:20.807961   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.807968   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:20.807975   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.808030   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.849919   80228 cri.go:89] found id: ""
	I0814 17:41:20.849944   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.849963   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:20.849970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.850045   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.887351   80228 cri.go:89] found id: ""
	I0814 17:41:20.887382   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.887393   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:20.887401   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.887465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.921284   80228 cri.go:89] found id: ""
	I0814 17:41:20.921310   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.921321   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.921328   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:20.921409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:20.955238   80228 cri.go:89] found id: ""
	I0814 17:41:20.955267   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.955278   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:20.955288   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.955314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:21.024544   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:21.024565   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.024579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:21.103987   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.104019   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.145515   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.145550   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.197307   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.197346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
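The "Gathering logs" steps above simply tail a fixed number of lines from the relevant systemd units (kubelet, crio) and the kernel ring buffer. A minimal sketch of that collection pattern, assuming a systemd host with journalctl on PATH; it is not minikube's ssh_runner-based collector:

package main

import (
	"fmt"
	"os/exec"
)

// tailUnit returns the last n journal lines for a systemd unit, the same
// shape of command the log shows (`journalctl -u <unit> -n 400`).
func tailUnit(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", fmt.Sprint(n)).CombinedOutput()
	return string(out), err
}

func main() {
	for _, unit := range []string{"kubelet", "crio"} {
		logs, err := tailUnit(unit, 400)
		if err != nil {
			fmt.Println(unit, "error:", err)
			continue
		}
		fmt.Printf("==> %s <==\n%s\n", unit, logs)
	}
}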
	I0814 17:41:17.514682   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.015152   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.429364   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.445075   79521 api_server.go:72] duration metric: took 4m16.759338748s to wait for apiserver process to appear ...
	I0814 17:41:20.445102   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:41:20.445133   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.445179   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.477630   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.477655   79521 cri.go:89] found id: ""
	I0814 17:41:20.477663   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:20.477714   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.481667   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.481728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.514443   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.514465   79521 cri.go:89] found id: ""
	I0814 17:41:20.514473   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:20.514516   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.518344   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.518401   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.559625   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:20.559647   79521 cri.go:89] found id: ""
	I0814 17:41:20.559653   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:20.559706   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.564137   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.564203   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.603504   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:20.603531   79521 cri.go:89] found id: ""
	I0814 17:41:20.603540   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:20.603602   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.608260   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.608334   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.641466   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:20.641487   79521 cri.go:89] found id: ""
	I0814 17:41:20.641494   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:20.641538   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.645566   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.645625   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.685003   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:20.685032   79521 cri.go:89] found id: ""
	I0814 17:41:20.685042   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:20.685104   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.690347   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.690429   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.733753   79521 cri.go:89] found id: ""
	I0814 17:41:20.733782   79521 logs.go:276] 0 containers: []
	W0814 17:41:20.733793   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.733800   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:20.733862   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:20.781659   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:20.781683   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:20.781689   79521 cri.go:89] found id: ""
	I0814 17:41:20.781697   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:20.781753   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.786293   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.790358   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.790377   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:20.916473   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:20.916513   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.968706   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:20.968743   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:21.003507   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:21.003546   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:21.049909   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:21.049961   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:21.090052   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:21.090080   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:21.129551   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.129585   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.174792   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.174828   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.247392   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.247440   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:21.261095   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:21.261129   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:21.306583   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:21.306616   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:21.339602   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:21.339642   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:21.397695   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.397732   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.301807   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:41:24.306392   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:41:24.307364   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:41:24.307390   79521 api_server.go:131] duration metric: took 3.862280551s to wait for apiserver health ...
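The healthz wait above succeeds once GET /healthz on the apiserver returns 200 with body "ok". A minimal probe sketch against the endpoint shown in the log; it skips TLS verification because the cluster CA is assumed not to be in the local trust store, and it is illustrative rather than the api_server.go check itself:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-CA-signed cert, so a bare probe
		// either skips verification (as here) or loads the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log reports "returned 200: ok" once the control plane is healthy.
	fmt.Println(resp.StatusCode, string(body))
}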
	I0814 17:41:24.307398   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:41:24.307418   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:24.307463   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:24.342519   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:24.342552   79521 cri.go:89] found id: ""
	I0814 17:41:24.342561   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:24.342627   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.346361   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:24.346422   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:24.386973   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:24.387001   79521 cri.go:89] found id: ""
	I0814 17:41:24.387012   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:24.387066   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.390942   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:24.390999   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:24.426841   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:24.426863   79521 cri.go:89] found id: ""
	I0814 17:41:24.426872   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:24.426927   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.430856   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:24.430917   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:24.467024   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.467050   79521 cri.go:89] found id: ""
	I0814 17:41:24.467059   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:24.467117   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.471659   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:24.471728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:24.506759   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:24.506786   79521 cri.go:89] found id: ""
	I0814 17:41:24.506799   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:24.506857   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.511660   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:24.511728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:24.547768   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:24.547795   79521 cri.go:89] found id: ""
	I0814 17:41:24.547805   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:24.547862   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.552881   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:24.552941   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:24.588519   79521 cri.go:89] found id: ""
	I0814 17:41:24.588544   79521 logs.go:276] 0 containers: []
	W0814 17:41:24.588551   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:24.588557   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:24.588602   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:24.624604   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.624626   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:24.624630   79521 cri.go:89] found id: ""
	I0814 17:41:24.624636   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:24.624691   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.628703   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.632611   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:24.632636   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.671903   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:24.671935   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.709821   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.709851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:25.107477   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:25.107515   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:25.221012   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:25.221041   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.760924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.259780   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.260347   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.712584   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:23.726467   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:23.726545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:23.762871   80228 cri.go:89] found id: ""
	I0814 17:41:23.762906   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.762916   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:23.762922   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:23.762972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:23.800068   80228 cri.go:89] found id: ""
	I0814 17:41:23.800096   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.800105   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:23.800113   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:23.800173   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:23.834913   80228 cri.go:89] found id: ""
	I0814 17:41:23.834945   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.834956   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:23.834963   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:23.835022   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:23.871196   80228 cri.go:89] found id: ""
	I0814 17:41:23.871222   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.871233   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:23.871240   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:23.871294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:23.907830   80228 cri.go:89] found id: ""
	I0814 17:41:23.907854   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.907862   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:23.907868   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:23.907926   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:23.941110   80228 cri.go:89] found id: ""
	I0814 17:41:23.941133   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.941141   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:23.941146   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:23.941197   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:23.973602   80228 cri.go:89] found id: ""
	I0814 17:41:23.973631   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.973649   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:23.973655   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:23.973710   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:24.007398   80228 cri.go:89] found id: ""
	I0814 17:41:24.007436   80228 logs.go:276] 0 containers: []
	W0814 17:41:24.007450   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:24.007462   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:24.007478   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:24.061830   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:24.061867   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:24.075012   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:24.075046   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:24.148666   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:24.148692   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.148703   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.230208   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:24.230248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:22.513616   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.013383   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.272397   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:25.272429   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:25.317574   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:25.317603   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:25.352239   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:25.352271   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:25.409997   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:25.410030   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:25.443875   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:25.443899   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:25.490987   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:25.491023   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:25.563495   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:25.563531   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:25.577305   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:25.577345   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:28.147823   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:41:28.147855   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.147860   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.147864   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.147867   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.147870   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.147874   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.147879   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.147883   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.147890   79521 system_pods.go:74] duration metric: took 3.840486938s to wait for pod list to return data ...
	I0814 17:41:28.147898   79521 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:41:28.150377   79521 default_sa.go:45] found service account: "default"
	I0814 17:41:28.150398   79521 default_sa.go:55] duration metric: took 2.493777ms for default service account to be created ...
	I0814 17:41:28.150406   79521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:41:28.154470   79521 system_pods.go:86] 8 kube-system pods found
	I0814 17:41:28.154494   79521 system_pods.go:89] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.154500   79521 system_pods.go:89] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.154504   79521 system_pods.go:89] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.154510   79521 system_pods.go:89] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.154514   79521 system_pods.go:89] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.154519   79521 system_pods.go:89] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.154525   79521 system_pods.go:89] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.154530   79521 system_pods.go:89] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.154537   79521 system_pods.go:126] duration metric: took 4.125964ms to wait for k8s-apps to be running ...
	I0814 17:41:28.154544   79521 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:41:28.154585   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:28.170494   79521 system_svc.go:56] duration metric: took 15.940728ms WaitForService to wait for kubelet
	I0814 17:41:28.170524   79521 kubeadm.go:582] duration metric: took 4m24.484791018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:41:28.170545   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:41:28.173368   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:41:28.173395   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:41:28.173407   79521 node_conditions.go:105] duration metric: took 2.858344ms to run NodePressure ...
	I0814 17:41:28.173417   79521 start.go:241] waiting for startup goroutines ...
	I0814 17:41:28.173424   79521 start.go:246] waiting for cluster config update ...
	I0814 17:41:28.173435   79521 start.go:255] writing updated cluster config ...
	I0814 17:41:28.173730   79521 ssh_runner.go:195] Run: rm -f paused
	I0814 17:41:28.219460   79521 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:41:28.221461   79521 out.go:177] * Done! kubectl is now configured to use "embed-certs-309673" cluster and "default" namespace by default
	I0814 17:41:27.761580   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:30.260454   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:26.776204   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:26.789057   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:26.789132   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:26.822531   80228 cri.go:89] found id: ""
	I0814 17:41:26.822564   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.822575   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:26.822590   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:26.822651   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:26.855314   80228 cri.go:89] found id: ""
	I0814 17:41:26.855353   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.855365   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:26.855372   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:26.855434   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:26.889389   80228 cri.go:89] found id: ""
	I0814 17:41:26.889413   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.889421   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:26.889427   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:26.889485   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:26.925478   80228 cri.go:89] found id: ""
	I0814 17:41:26.925500   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.925508   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:26.925514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:26.925560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:26.957012   80228 cri.go:89] found id: ""
	I0814 17:41:26.957042   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.957053   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:26.957061   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:26.957114   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:26.989358   80228 cri.go:89] found id: ""
	I0814 17:41:26.989388   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.989399   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:26.989406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:26.989468   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:27.024761   80228 cri.go:89] found id: ""
	I0814 17:41:27.024786   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.024805   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:27.024830   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:27.024895   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:27.059172   80228 cri.go:89] found id: ""
	I0814 17:41:27.059204   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.059215   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:27.059226   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:27.059240   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.096123   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:27.096151   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:27.147689   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:27.147719   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:27.161454   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:27.161483   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:27.234644   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:27.234668   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:27.234680   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:29.817428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:29.831731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:29.831811   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:29.868531   80228 cri.go:89] found id: ""
	I0814 17:41:29.868567   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.868577   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:29.868585   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:29.868657   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:29.913578   80228 cri.go:89] found id: ""
	I0814 17:41:29.913602   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.913611   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:29.913617   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:29.913677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:29.963916   80228 cri.go:89] found id: ""
	I0814 17:41:29.963939   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.963946   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:29.963952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:29.964011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:30.016735   80228 cri.go:89] found id: ""
	I0814 17:41:30.016763   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.016773   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:30.016781   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:30.016841   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:30.048852   80228 cri.go:89] found id: ""
	I0814 17:41:30.048880   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.048890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:30.048898   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:30.048960   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:30.080291   80228 cri.go:89] found id: ""
	I0814 17:41:30.080324   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.080335   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:30.080343   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:30.080506   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:30.113876   80228 cri.go:89] found id: ""
	I0814 17:41:30.113904   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.113914   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:30.113921   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:30.113984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:30.147568   80228 cri.go:89] found id: ""
	I0814 17:41:30.147594   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.147604   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:30.147614   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:30.147627   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:30.197596   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:30.197630   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:30.210576   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:30.210602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:30.277711   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:30.277731   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:30.277746   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:30.356556   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:30.356590   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.013699   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:29.014020   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:31.512974   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:32.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:35.254066   79871 pod_ready.go:81] duration metric: took 4m0.000392709s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:35.254095   79871 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:41:35.254112   79871 pod_ready.go:38] duration metric: took 4m12.044429915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:35.254137   79871 kubeadm.go:597] duration metric: took 4m20.041916203s to restartPrimaryControlPlane
	W0814 17:41:35.254189   79871 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:35.254218   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:32.892697   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:32.909435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:32.909497   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:32.945055   80228 cri.go:89] found id: ""
	I0814 17:41:32.945080   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.945088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:32.945094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:32.945150   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:32.979266   80228 cri.go:89] found id: ""
	I0814 17:41:32.979294   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.979305   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:32.979312   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:32.979398   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:33.014260   80228 cri.go:89] found id: ""
	I0814 17:41:33.014286   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.014294   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:33.014299   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:33.014351   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:33.047590   80228 cri.go:89] found id: ""
	I0814 17:41:33.047622   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.047633   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:33.047646   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:33.047711   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:33.081258   80228 cri.go:89] found id: ""
	I0814 17:41:33.081294   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.081328   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:33.081337   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:33.081403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:33.112209   80228 cri.go:89] found id: ""
	I0814 17:41:33.112237   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.112247   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:33.112254   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:33.112318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:33.143854   80228 cri.go:89] found id: ""
	I0814 17:41:33.143892   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.143904   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:33.143913   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:33.143977   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:33.175147   80228 cri.go:89] found id: ""
	I0814 17:41:33.175190   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.175201   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:33.175212   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:33.175226   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:33.212877   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:33.212908   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:33.268067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:33.268103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:33.281357   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:33.281386   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:33.350233   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:33.350257   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:33.350269   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:35.929498   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:35.942290   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:35.942354   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:35.975782   80228 cri.go:89] found id: ""
	I0814 17:41:35.975809   80228 logs.go:276] 0 containers: []
	W0814 17:41:35.975818   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:35.975826   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:35.975886   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:36.008165   80228 cri.go:89] found id: ""
	I0814 17:41:36.008191   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.008200   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:36.008206   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:36.008262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:36.044912   80228 cri.go:89] found id: ""
	I0814 17:41:36.044937   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.044945   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:36.044954   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:36.045002   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:36.078068   80228 cri.go:89] found id: ""
	I0814 17:41:36.078096   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.078108   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:36.078116   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:36.078179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:36.110429   80228 cri.go:89] found id: ""
	I0814 17:41:36.110456   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.110467   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:36.110480   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:36.110540   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:36.142086   80228 cri.go:89] found id: ""
	I0814 17:41:36.142111   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.142119   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:36.142125   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:36.142186   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:36.172738   80228 cri.go:89] found id: ""
	I0814 17:41:36.172761   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.172769   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:36.172775   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:36.172831   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:36.204345   80228 cri.go:89] found id: ""
	I0814 17:41:36.204368   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.204376   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:36.204388   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:36.204403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:36.216667   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:36.216689   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:36.279509   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:36.279528   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:36.279540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:33.513591   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.013400   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.360411   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:36.360447   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:36.398193   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:36.398230   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:38.952415   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:38.968484   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:38.968554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:39.002450   80228 cri.go:89] found id: ""
	I0814 17:41:39.002479   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.002486   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:39.002493   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:39.002551   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:39.035840   80228 cri.go:89] found id: ""
	I0814 17:41:39.035868   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.035876   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:39.035882   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:39.035934   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:39.069900   80228 cri.go:89] found id: ""
	I0814 17:41:39.069929   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.069940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:39.069946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:39.069999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:39.104657   80228 cri.go:89] found id: ""
	I0814 17:41:39.104681   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.104689   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:39.104695   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:39.104751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:39.137279   80228 cri.go:89] found id: ""
	I0814 17:41:39.137312   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.137322   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:39.137330   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:39.137403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:39.170377   80228 cri.go:89] found id: ""
	I0814 17:41:39.170414   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.170424   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:39.170430   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:39.170491   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:39.205742   80228 cri.go:89] found id: ""
	I0814 17:41:39.205779   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.205790   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:39.205796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:39.205850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:39.239954   80228 cri.go:89] found id: ""
	I0814 17:41:39.239979   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.239987   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:39.239994   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:39.240011   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:39.276587   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:39.276619   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:39.329286   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:39.329322   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:39.342232   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:39.342257   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:39.411043   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:39.411063   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:39.411075   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:38.013562   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:40.013740   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:41.994479   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:42.007736   80228 kubeadm.go:597] duration metric: took 4m4.488869114s to restartPrimaryControlPlane
	W0814 17:41:42.007822   80228 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:42.007871   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:42.513259   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:45.013455   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:46.541593   80228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.533697889s)
	I0814 17:41:46.541676   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:46.556181   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:41:46.565943   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:41:46.575481   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:41:46.575501   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:41:46.575549   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:41:46.585143   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:41:46.585202   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:41:46.595157   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:41:46.604539   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:41:46.604600   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:41:46.613345   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.622186   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:41:46.622242   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.631221   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:41:46.640649   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:41:46.640706   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:41:46.650161   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:41:46.724104   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:41:46.724182   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:41:46.860463   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:41:46.860606   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:41:46.860725   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:41:47.036697   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:41:47.038444   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:41:47.038561   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:41:47.038670   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:41:47.038775   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:41:47.038860   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:41:47.038973   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:41:47.039067   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:41:47.039172   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:41:47.039256   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:41:47.039359   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:41:47.039456   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:41:47.039516   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:41:47.039587   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:41:47.278696   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:41:47.664300   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:41:47.988137   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:41:48.076560   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:41:48.093447   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:41:48.094656   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:41:48.094793   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:41:48.253225   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:41:48.255034   80228 out.go:204]   - Booting up control plane ...
	I0814 17:41:48.255160   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:41:48.259041   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:41:48.260074   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:41:48.260862   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:41:48.262910   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:41:47.513415   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:50.012937   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:52.013499   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:54.514150   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:57.013146   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:59.013393   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.014185   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.441261   79871 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.187019598s)
	I0814 17:42:01.441333   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:01.457213   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:01.466802   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:01.475719   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:01.475736   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:01.475784   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:42:01.484555   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:01.484618   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:01.493956   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:42:01.503873   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:01.503923   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:01.514710   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.524473   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:01.524531   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.534749   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:42:01.544491   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:01.544558   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:01.555481   79871 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:01.599801   79871 kubeadm.go:310] W0814 17:42:01.575622    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.600615   79871 kubeadm.go:310] W0814 17:42:01.576625    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.703064   79871 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:03.513007   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:05.514241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:09.627141   79871 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:09.627216   79871 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:09.627344   79871 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:09.627480   79871 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:09.627638   79871 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:09.627717   79871 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:09.629272   79871 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:09.629370   79871 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:09.629472   79871 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:09.629592   79871 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:09.629712   79871 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:09.629780   79871 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:09.629826   79871 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:09.629898   79871 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:09.629963   79871 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:09.630076   79871 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:09.630198   79871 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:09.630253   79871 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:09.630314   79871 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:09.630357   79871 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:09.630412   79871 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:09.630457   79871 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:09.630509   79871 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:09.630560   79871 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:09.630629   79871 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:09.630688   79871 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:09.632664   79871 out.go:204]   - Booting up control plane ...
	I0814 17:42:09.632763   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:09.632878   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:09.632963   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:09.633100   79871 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:09.633207   79871 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:09.633252   79871 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:09.633412   79871 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:09.633542   79871 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:09.633624   79871 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004125702s
	I0814 17:42:09.633727   79871 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:09.633814   79871 kubeadm.go:310] [api-check] The API server is healthy after 4.501648596s
	I0814 17:42:09.633967   79871 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:09.634119   79871 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:09.634169   79871 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:09.634328   79871 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-885666 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:09.634400   79871 kubeadm.go:310] [bootstrap-token] Using token: 17ct2j.hazurgskaspe26qx
	I0814 17:42:09.635732   79871 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:09.635859   79871 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:09.635990   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:09.636141   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:09.636250   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:09.636347   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:09.636485   79871 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:09.636657   79871 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:09.636708   79871 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:09.636747   79871 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:09.636753   79871 kubeadm.go:310] 
	I0814 17:42:09.636813   79871 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:09.636835   79871 kubeadm.go:310] 
	I0814 17:42:09.636972   79871 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:09.636995   79871 kubeadm.go:310] 
	I0814 17:42:09.637029   79871 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:09.637120   79871 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:09.637185   79871 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:09.637195   79871 kubeadm.go:310] 
	I0814 17:42:09.637267   79871 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:09.637277   79871 kubeadm.go:310] 
	I0814 17:42:09.637315   79871 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:09.637321   79871 kubeadm.go:310] 
	I0814 17:42:09.637384   79871 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:09.637461   79871 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:09.637523   79871 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:09.637529   79871 kubeadm.go:310] 
	I0814 17:42:09.637623   79871 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:09.637691   79871 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:09.637703   79871 kubeadm.go:310] 
	I0814 17:42:09.637779   79871 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.637866   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:09.637890   79871 kubeadm.go:310] 	--control-plane 
	I0814 17:42:09.637899   79871 kubeadm.go:310] 
	I0814 17:42:09.638010   79871 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:09.638020   79871 kubeadm.go:310] 
	I0814 17:42:09.638098   79871 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.638211   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
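The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key. A minimal sketch of how that value could be recomputed on the control-plane node for verification, assuming the "/var/lib/minikube/certs" certificateDir reported earlier in this log and an RSA CA key (kubeadm's default):

	# Sketch: recompute the discovery-token CA cert hash shown in the join command above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | sha256sum | cut -d' ' -f1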
	I0814 17:42:09.638234   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:42:09.638246   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:09.639748   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:09.641031   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:09.652173   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:42:09.670482   79871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-885666 minikube.k8s.io/updated_at=2024_08_14T17_42_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=default-k8s-diff-port-885666 minikube.k8s.io/primary=true
	I0814 17:42:09.703097   79871 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:09.881340   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:10.381470   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:07.516539   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.015456   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.882013   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.382239   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.881638   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.381703   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.881401   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.381524   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.881402   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.381696   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.498441   79871 kubeadm.go:1113] duration metric: took 4.827929439s to wait for elevateKubeSystemPrivileges
	I0814 17:42:14.498474   79871 kubeadm.go:394] duration metric: took 4m59.336328921s to StartCluster
	I0814 17:42:14.498493   79871 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.498581   79871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:42:14.501029   79871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.501309   79871 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:42:14.501432   79871 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:42:14.501508   79871 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501541   79871 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501550   79871 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:42:14.501577   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501590   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:42:14.501619   79871 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501667   79871 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501677   79871 addons.go:243] addon metrics-server should already be in state true
	I0814 17:42:14.501716   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501593   79871 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501840   79871 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-885666"
	I0814 17:42:14.502106   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502142   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502160   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502174   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502176   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502199   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502371   79871 out.go:177] * Verifying Kubernetes components...
	I0814 17:42:14.504085   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:42:14.519401   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0814 17:42:14.519631   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0814 17:42:14.520085   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520196   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520701   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520722   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.520789   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0814 17:42:14.520978   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520994   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.521255   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.521519   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521524   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521703   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.522021   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.522051   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.522548   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.522572   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.522864   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.523507   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.523550   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.525737   79871 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.525759   79871 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:42:14.525789   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.526144   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.526170   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.538930   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0814 17:42:14.538995   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0814 17:42:14.539567   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.539594   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.540125   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540138   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540266   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540289   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540624   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540770   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540825   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.540970   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.542540   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.542848   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.544439   79871 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:42:14.544444   79871 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:42:14.544881   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0814 17:42:14.545315   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.545575   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:42:14.545586   79871 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:42:14.545601   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545672   79871 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.545691   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:42:14.545708   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545750   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.545759   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.546339   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.547167   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.547290   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.549794   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.549829   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550423   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550637   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550707   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551025   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551119   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551168   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551302   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.551658   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.567203   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I0814 17:42:14.567613   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.568141   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.568167   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.568484   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.568678   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.570337   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.570867   79871 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.570888   79871 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:42:14.570906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.574091   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574562   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.574586   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574667   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.574857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.575039   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.575197   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.675594   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:42:14.694520   79871 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702650   79871 node_ready.go:49] node "default-k8s-diff-port-885666" has status "Ready":"True"
	I0814 17:42:14.702672   79871 node_ready.go:38] duration metric: took 8.119351ms for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702684   79871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:14.707535   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:14.762686   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.805275   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.837118   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:42:14.837143   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:42:14.881848   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:42:14.881872   79871 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:42:14.902731   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.902762   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903058   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903076   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.903092   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.903111   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903448   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.903484   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903493   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.908980   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.908995   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.909239   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.909310   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.909336   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.920224   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:14.920249   79871 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:42:14.955256   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:15.297167   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297190   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297544   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.297602   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297631   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.297649   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297659   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297865   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297885   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842348   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842376   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.842688   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.842707   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842716   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842738   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.842805   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.843057   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.843070   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.843081   79871 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-885666"
	I0814 17:42:15.844747   79871 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 17:42:12.513055   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:14.514298   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:15.845895   79871 addons.go:510] duration metric: took 1.344461878s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 17:42:16.714014   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:18.715243   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:17.013231   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:19.013966   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:20.507978   79367 pod_ready.go:81] duration metric: took 4m0.001138158s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	E0814 17:42:20.508026   79367 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:42:20.508048   79367 pod_ready.go:38] duration metric: took 4m6.305785273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:20.508081   79367 kubeadm.go:597] duration metric: took 4m13.455842043s to restartPrimaryControlPlane
	W0814 17:42:20.508145   79367 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:42:20.508186   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:42:20.714660   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.714687   79871 pod_ready.go:81] duration metric: took 6.007129076s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.714696   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719517   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.719542   79871 pod_ready.go:81] duration metric: took 4.838754ms for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719554   79871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724787   79871 pod_ready.go:92] pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.724816   79871 pod_ready.go:81] duration metric: took 5.250194ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724834   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731431   79871 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.731456   79871 pod_ready.go:81] duration metric: took 1.00661383s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731468   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736442   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.736467   79871 pod_ready.go:81] duration metric: took 4.989787ms for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736480   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911495   79871 pod_ready.go:92] pod "kube-proxy-254cb" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.911520   79871 pod_ready.go:81] duration metric: took 175.03218ms for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911529   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311700   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:22.311730   79871 pod_ready.go:81] duration metric: took 400.194781ms for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311739   79871 pod_ready.go:38] duration metric: took 7.609043377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:22.311752   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:42:22.311800   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:42:22.326995   79871 api_server.go:72] duration metric: took 7.825649112s to wait for apiserver process to appear ...
	I0814 17:42:22.327018   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:42:22.327036   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:42:22.331069   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:42:22.332077   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:42:22.332096   79871 api_server.go:131] duration metric: took 5.0724ms to wait for apiserver health ...
	I0814 17:42:22.332103   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:42:22.514565   79871 system_pods.go:59] 9 kube-system pods found
	I0814 17:42:22.514595   79871 system_pods.go:61] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.514601   79871 system_pods.go:61] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.514606   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.514610   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.514614   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.514617   79871 system_pods.go:61] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.514620   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.514626   79871 system_pods.go:61] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.514631   79871 system_pods.go:61] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.514639   79871 system_pods.go:74] duration metric: took 182.531543ms to wait for pod list to return data ...
	I0814 17:42:22.514647   79871 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:42:22.713101   79871 default_sa.go:45] found service account: "default"
	I0814 17:42:22.713125   79871 default_sa.go:55] duration metric: took 198.471814ms for default service account to be created ...
	I0814 17:42:22.713136   79871 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:42:22.914576   79871 system_pods.go:86] 9 kube-system pods found
	I0814 17:42:22.914619   79871 system_pods.go:89] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.914628   79871 system_pods.go:89] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.914635   79871 system_pods.go:89] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.914643   79871 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.914650   79871 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.914657   79871 system_pods.go:89] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.914665   79871 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.914678   79871 system_pods.go:89] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.914689   79871 system_pods.go:89] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.914705   79871 system_pods.go:126] duration metric: took 201.563199ms to wait for k8s-apps to be running ...
	I0814 17:42:22.914716   79871 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:42:22.914768   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:22.928499   79871 system_svc.go:56] duration metric: took 13.774119ms WaitForService to wait for kubelet
	I0814 17:42:22.928525   79871 kubeadm.go:582] duration metric: took 8.427183796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:42:22.928543   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:42:23.112363   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:42:23.112398   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:42:23.112410   79871 node_conditions.go:105] duration metric: took 183.861382ms to run NodePressure ...
	I0814 17:42:23.112423   79871 start.go:241] waiting for startup goroutines ...
	I0814 17:42:23.112432   79871 start.go:246] waiting for cluster config update ...
	I0814 17:42:23.112446   79871 start.go:255] writing updated cluster config ...
	I0814 17:42:23.112792   79871 ssh_runner.go:195] Run: rm -f paused
	I0814 17:42:23.162700   79871 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:42:23.164689   79871 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-885666" cluster and "default" namespace by default
	I0814 17:42:28.263217   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:42:28.263629   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:28.263853   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:33.264169   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:33.264403   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:43.264648   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:43.264858   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
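The repeated [kubelet-check] failures above (process 80228) mean the kubelet on that node never answered its health endpoint within the retry window. The probe kubeadm keeps retrying can be reproduced by hand; a sketch of the usual on-node diagnostics (illustrative commands, not part of the test run):

	# Sketch: the same health probe kubeadm retries above, plus unit state and recent logs.
	curl -sSL http://localhost:10248/healthz
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet --no-pager -n 50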
	I0814 17:42:46.859569   79367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.351355314s)
	I0814 17:42:46.859653   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:46.875530   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:46.884772   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:46.894185   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:46.894208   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:46.894258   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:42:46.903690   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:46.903748   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:46.913277   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:42:46.922120   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:46.922173   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:46.931143   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.939936   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:46.939997   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.949257   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:42:46.958109   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:46.958169   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
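The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the endpoint is absent (here all four files were already gone after the kubeadm reset). A minimal shell sketch of that pattern, written for illustration rather than taken from the minikube source:

	# Illustration of the check-then-remove pattern visible in the log lines above.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done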
	I0814 17:42:46.967609   79367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:47.010119   79367 kubeadm.go:310] W0814 17:42:46.983769    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.010889   79367 kubeadm.go:310] W0814 17:42:46.984558    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.122746   79367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:55.571963   79367 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:55.572017   79367 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:55.572127   79367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:55.572236   79367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:55.572314   79367 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:55.572385   79367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:55.574178   79367 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:55.574288   79367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:55.574372   79367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:55.574485   79367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:55.574573   79367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:55.574669   79367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:55.574740   79367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:55.574811   79367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:55.574909   79367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:55.575014   79367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:55.575135   79367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:55.575187   79367 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:55.575238   79367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:55.575288   79367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:55.575359   79367 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:55.575438   79367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:55.575521   79367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:55.575608   79367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:55.575759   79367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:55.575869   79367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:55.577331   79367 out.go:204]   - Booting up control plane ...
	I0814 17:42:55.577429   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:55.577511   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:55.577587   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:55.577773   79367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:55.577908   79367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:55.577968   79367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:55.578152   79367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:55.578301   79367 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:55.578368   79367 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.938552ms
	I0814 17:42:55.578428   79367 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:55.578480   79367 kubeadm.go:310] [api-check] The API server is healthy after 5.00239154s
	I0814 17:42:55.578605   79367 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:55.578777   79367 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:55.578863   79367 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:55.579025   79367 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-545149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:55.579100   79367 kubeadm.go:310] [bootstrap-token] Using token: qzd0yh.k8a8j7f6vmqndeav
	I0814 17:42:55.580318   79367 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:55.580429   79367 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:55.580503   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:55.580683   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:55.580839   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:55.580935   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:55.581047   79367 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:55.581197   79367 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:55.581235   79367 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:55.581279   79367 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:55.581285   79367 kubeadm.go:310] 
	I0814 17:42:55.581339   79367 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:55.581355   79367 kubeadm.go:310] 
	I0814 17:42:55.581470   79367 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:55.581480   79367 kubeadm.go:310] 
	I0814 17:42:55.581519   79367 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:55.581586   79367 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:55.581654   79367 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:55.581663   79367 kubeadm.go:310] 
	I0814 17:42:55.581749   79367 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:55.581757   79367 kubeadm.go:310] 
	I0814 17:42:55.581798   79367 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:55.581804   79367 kubeadm.go:310] 
	I0814 17:42:55.581857   79367 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:55.581944   79367 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:55.582007   79367 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:55.582014   79367 kubeadm.go:310] 
	I0814 17:42:55.582085   79367 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:55.582148   79367 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:55.582154   79367 kubeadm.go:310] 
	I0814 17:42:55.582221   79367 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582313   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:55.582333   79367 kubeadm.go:310] 	--control-plane 
	I0814 17:42:55.582336   79367 kubeadm.go:310] 
	I0814 17:42:55.582426   79367 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:55.582434   79367 kubeadm.go:310] 
	I0814 17:42:55.582518   79367 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582678   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:55.582691   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:42:55.582697   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:55.584337   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:55.585493   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:55.596201   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
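	(annotation) The line above shows minikube writing a 496-byte bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist. The exact file contents are not reproduced in the log; as a rough sketch of what a bridge-plugin conflist of that kind looks like, the Go program below marshals a minimal one. The network name, subnet, and plugin options are assumptions chosen for the example, not minikube's actual values.

	// conflist.go - illustrative only; prints a minimal bridge CNI conflist.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge", // assumed network name
			"plugins": []map[string]interface{}{
				{
					"type":        "bridge",
					"bridge":      "bridge",
					"isGateway":   true,
					"ipMasq":      true,
					"hairpinMode": true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16", // assumed pod CIDR
					},
				},
				{
					"type":         "portmap",
					"capabilities": map[string]bool{"portMappings": true},
				},
			},
		}
		out, err := json.MarshalIndent(conflist, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}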
	I0814 17:42:55.617012   79367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:55.617115   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:55.617152   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-545149 minikube.k8s.io/updated_at=2024_08_14T17_42_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=no-preload-545149 minikube.k8s.io/primary=true
	I0814 17:42:55.794262   79367 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:55.794421   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.294450   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.795280   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.294604   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.794700   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.294863   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.795404   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.295066   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.794529   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.294720   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.409254   79367 kubeadm.go:1113] duration metric: took 4.79220609s to wait for elevateKubeSystemPrivileges
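	(annotation) The block above is a poll loop: "kubectl get sa default" is retried roughly every 500ms until the default service account exists, at which point elevateKubeSystemPrivileges completes. A minimal Go sketch of that retry pattern, under the assumption that kubectl is on PATH; the interval and timeout are illustrative, not minikube's exact implementation.

	// waitloop.go - re-run a check until it succeeds or a deadline expires.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitFor(check func() error, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s", timeout)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		err := waitFor(func() error {
			// Succeeds once the "default" service account has been created.
			return exec.Command("kubectl", "get", "sa", "default").Run()
		}, 500*time.Millisecond, 2*time.Minute)
		fmt.Println("done:", err)
	}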
	I0814 17:43:00.409300   79367 kubeadm.go:394] duration metric: took 4m53.401266889s to StartCluster
	I0814 17:43:00.409323   79367 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.409419   79367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:43:00.411076   79367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.411313   79367 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:43:00.411438   79367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:43:00.411521   79367 addons.go:69] Setting storage-provisioner=true in profile "no-preload-545149"
	I0814 17:43:00.411529   79367 addons.go:69] Setting default-storageclass=true in profile "no-preload-545149"
	I0814 17:43:00.411552   79367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545149"
	I0814 17:43:00.411554   79367 addons.go:234] Setting addon storage-provisioner=true in "no-preload-545149"
	I0814 17:43:00.411564   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:43:00.411568   79367 addons.go:69] Setting metrics-server=true in profile "no-preload-545149"
	W0814 17:43:00.411566   79367 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:43:00.411601   79367 addons.go:234] Setting addon metrics-server=true in "no-preload-545149"
	W0814 17:43:00.411612   79367 addons.go:243] addon metrics-server should already be in state true
	I0814 17:43:00.411637   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411646   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411922   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.411954   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412019   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412053   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412076   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412103   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412914   79367 out.go:177] * Verifying Kubernetes components...
	I0814 17:43:00.414471   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:43:00.427965   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0814 17:43:00.427966   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0814 17:43:00.428460   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428608   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428985   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429004   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429130   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429147   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429206   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0814 17:43:00.429346   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429443   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429498   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.429543   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.430131   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.430152   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.430435   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.430446   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.430718   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.431238   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.431270   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.433273   79367 addons.go:234] Setting addon default-storageclass=true in "no-preload-545149"
	W0814 17:43:00.433292   79367 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:43:00.433319   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.433551   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.433581   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.450138   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0814 17:43:00.450327   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0814 17:43:00.450697   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.450818   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.451527   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451547   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451695   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451706   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451958   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.452224   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.452283   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.453110   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.453141   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.453937   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.455467   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0814 17:43:00.455825   79367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:43:00.455934   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.456456   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.456479   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.456964   79367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.456981   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:43:00.456999   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.457000   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.457144   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.459264   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.460208   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460606   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.460636   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460750   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.460858   79367 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:43:00.460989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.461150   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.461281   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.462118   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:43:00.462138   79367 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:43:00.462156   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.465200   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465643   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.465710   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465829   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.466004   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.466165   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.466312   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.478054   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0814 17:43:00.478616   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.479176   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.479198   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.479536   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.479725   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.481351   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.481574   79367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.481588   79367 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:43:00.481606   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.484454   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484738   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.484771   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.485222   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.485370   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.485485   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.617562   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:43:00.665134   79367 node_ready.go:35] waiting up to 6m0s for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673659   79367 node_ready.go:49] node "no-preload-545149" has status "Ready":"True"
	I0814 17:43:00.673680   79367 node_ready.go:38] duration metric: took 8.508683ms for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673689   79367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:00.680313   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:00.810401   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.827621   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.871727   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:43:00.871752   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:43:00.969061   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:43:00.969088   79367 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:43:01.103808   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.103839   79367 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:43:01.198160   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.880623   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052957744s)
	I0814 17:43:01.880683   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880697   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.880749   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070305568s)
	I0814 17:43:01.880785   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880804   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881075   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881093   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881103   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881115   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881248   79367 main.go:141] libmachine: (no-preload-545149) DBG | Closing plugin on server side
	I0814 17:43:01.881284   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881312   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881336   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881375   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881385   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881396   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881682   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881703   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.896050   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.896076   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.896351   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.896370   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131404   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131427   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.131744   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.131768   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131780   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131788   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.132004   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.132026   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.132042   79367 addons.go:475] Verifying addon metrics-server=true in "no-preload-545149"
	I0814 17:43:02.133699   79367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:43:03.265508   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:03.265720   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:02.135365   79367 addons.go:510] duration metric: took 1.72392081s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:43:02.687160   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:05.186062   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:07.187193   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:09.188957   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.188990   79367 pod_ready.go:81] duration metric: took 8.508650006s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.189003   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194469   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.194492   79367 pod_ready.go:81] duration metric: took 5.48133ms for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194501   79367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199127   79367 pod_ready.go:92] pod "etcd-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.199150   79367 pod_ready.go:81] duration metric: took 4.643296ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199159   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203804   79367 pod_ready.go:92] pod "kube-apiserver-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.203825   79367 pod_ready.go:81] duration metric: took 4.659513ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203837   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208443   79367 pod_ready.go:92] pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.208465   79367 pod_ready.go:81] duration metric: took 4.620634ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208479   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584443   79367 pod_ready.go:92] pod "kube-proxy-s6bps" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.584471   79367 pod_ready.go:81] duration metric: took 375.985094ms for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584481   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985476   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.985504   79367 pod_ready.go:81] duration metric: took 401.014791ms for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985515   79367 pod_ready.go:38] duration metric: took 9.311816641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:09.985534   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:43:09.985603   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:43:10.002239   79367 api_server.go:72] duration metric: took 9.590875358s to wait for apiserver process to appear ...
	I0814 17:43:10.002276   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:43:10.002304   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:43:10.009410   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:43:10.010351   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:43:10.010381   79367 api_server.go:131] duration metric: took 8.098086ms to wait for apiserver health ...
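	(annotation) The healthz check logged above is an HTTPS GET against the apiserver endpoint that returns a 200 with body "ok" when the control plane is healthy. A self-contained Go sketch of that probe; InsecureSkipVerify is used only to keep the example runnable without the cluster CA, which the real client would trust instead.

	// healthz.go - probe the apiserver /healthz endpoint seen in the log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
			},
		}
		resp, err := client.Get("https://192.168.39.162:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	}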
	I0814 17:43:10.010389   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:43:10.189597   79367 system_pods.go:59] 9 kube-system pods found
	I0814 17:43:10.189629   79367 system_pods.go:61] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.189634   79367 system_pods.go:61] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.189638   79367 system_pods.go:61] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.189642   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.189646   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.189649   79367 system_pods.go:61] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.189652   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.189658   79367 system_pods.go:61] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.189662   79367 system_pods.go:61] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.189670   79367 system_pods.go:74] duration metric: took 179.275641ms to wait for pod list to return data ...
	I0814 17:43:10.189678   79367 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:43:10.385690   79367 default_sa.go:45] found service account: "default"
	I0814 17:43:10.385715   79367 default_sa.go:55] duration metric: took 196.030333ms for default service account to be created ...
	I0814 17:43:10.385723   79367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:43:10.590237   79367 system_pods.go:86] 9 kube-system pods found
	I0814 17:43:10.590272   79367 system_pods.go:89] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.590279   79367 system_pods.go:89] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.590285   79367 system_pods.go:89] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.590291   79367 system_pods.go:89] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.590299   79367 system_pods.go:89] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.590306   79367 system_pods.go:89] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.590312   79367 system_pods.go:89] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.590322   79367 system_pods.go:89] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.590335   79367 system_pods.go:89] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.590351   79367 system_pods.go:126] duration metric: took 204.620982ms to wait for k8s-apps to be running ...
	I0814 17:43:10.590364   79367 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:43:10.590418   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:10.605594   79367 system_svc.go:56] duration metric: took 15.223089ms WaitForService to wait for kubelet
	I0814 17:43:10.605624   79367 kubeadm.go:582] duration metric: took 10.194267533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:43:10.605644   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:43:10.786127   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:43:10.786160   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:43:10.786173   79367 node_conditions.go:105] duration metric: took 180.522994ms to run NodePressure ...
	I0814 17:43:10.786187   79367 start.go:241] waiting for startup goroutines ...
	I0814 17:43:10.786197   79367 start.go:246] waiting for cluster config update ...
	I0814 17:43:10.786210   79367 start.go:255] writing updated cluster config ...
	I0814 17:43:10.786498   79367 ssh_runner.go:195] Run: rm -f paused
	I0814 17:43:10.834139   79367 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:43:10.836315   79367 out.go:177] * Done! kubectl is now configured to use "no-preload-545149" cluster and "default" namespace by default
	I0814 17:43:43.267316   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:43.267596   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:43.267623   80228 kubeadm.go:310] 
	I0814 17:43:43.267680   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:43:43.267757   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:43:43.267778   80228 kubeadm.go:310] 
	I0814 17:43:43.267839   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:43:43.267894   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:43:43.268029   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:43:43.268044   80228 kubeadm.go:310] 
	I0814 17:43:43.268190   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:43:43.268247   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:43:43.268296   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:43:43.268305   80228 kubeadm.go:310] 
	I0814 17:43:43.268446   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:43:43.268561   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:43:43.268572   80228 kubeadm.go:310] 
	I0814 17:43:43.268741   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:43:43.268907   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:43:43.269021   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:43:43.269120   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:43:43.269131   80228 kubeadm.go:310] 
	I0814 17:43:43.269560   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:43:43.269642   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:43:43.269698   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
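	(annotation) The kubelet-check messages repeated above boil down to a plain HTTP GET against the kubelet's local healthz port; "connection refused" on 127.0.0.1:10248 simply means nothing is listening there, i.e. the kubelet never came up. A minimal Go sketch of that probe:

	// kubelethealthz.go - the check kubeadm repeats while waiting for the kubelet.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// e.g. dial tcp 127.0.0.1:10248: connect: connection refused
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz status:", resp.StatusCode)
	}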
	W0814 17:43:43.269809   80228 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 17:43:43.269853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:43:43.733975   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:43.748632   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:43:43.758474   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:43:43.758493   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:43:43.758543   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:43:43.767721   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:43:43.767777   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:43:43.777259   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:43:43.786562   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:43:43.786623   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:43:43.795253   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.803677   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:43:43.803733   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.812416   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:43:43.821020   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:43:43.821075   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
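	(annotation) The grep/rm sequence above is a stale-config cleanup pass: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and removed otherwise so the retried "kubeadm init" can regenerate it (here the files were already absent, so every grep fails and every rm is a no-op). A hedged Go sketch of that pattern, for illustration only; it needs root to act on /etc/kubernetes and is not minikube's exact code.

	// stalecfg.go - keep kubeconfigs that reference the control-plane endpoint,
	// remove the rest so a subsequent kubeadm init rewrites them.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && strings.Contains(string(data), endpoint) {
				fmt.Println("keeping", f)
				continue
			}
			// Missing or pointing elsewhere: remove so kubeadm rewrites it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove failed:", rmErr)
			}
		}
	}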
	I0814 17:43:43.829709   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:43:44.024836   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:45:40.060126   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:45:40.060266   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:45:40.061931   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:45:40.062003   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:45:40.062110   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:45:40.062231   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:45:40.062360   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:45:40.062459   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:45:40.063940   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:45:40.064041   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:45:40.064124   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:45:40.064230   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:45:40.064305   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:45:40.064398   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:45:40.064471   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:45:40.064550   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:45:40.064632   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:45:40.064712   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:45:40.064798   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:45:40.064857   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:45:40.064913   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:45:40.064956   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:45:40.065040   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:45:40.065146   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:45:40.065222   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:45:40.065366   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:45:40.065490   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:45:40.065547   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:45:40.065648   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:45:40.068108   80228 out.go:204]   - Booting up control plane ...
	I0814 17:45:40.068211   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:45:40.068294   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:45:40.068395   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:45:40.068522   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:45:40.068675   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:45:40.068751   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:45:40.068843   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069048   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069141   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069393   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069510   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069756   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069823   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069982   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070051   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.070204   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070211   80228 kubeadm.go:310] 
	I0814 17:45:40.070244   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:45:40.070291   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:45:40.070299   80228 kubeadm.go:310] 
	I0814 17:45:40.070337   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:45:40.070379   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:45:40.070504   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:45:40.070521   80228 kubeadm.go:310] 
	I0814 17:45:40.070673   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:45:40.070723   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:45:40.070764   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:45:40.070776   80228 kubeadm.go:310] 
	I0814 17:45:40.070876   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:45:40.070945   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:45:40.070953   80228 kubeadm.go:310] 
	I0814 17:45:40.071045   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:45:40.071151   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:45:40.071246   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:45:40.071363   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:45:40.071453   80228 kubeadm.go:310] 
	I0814 17:45:40.071481   80228 kubeadm.go:394] duration metric: took 8m2.599023024s to StartCluster
	I0814 17:45:40.071554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:45:40.071617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:45:40.115691   80228 cri.go:89] found id: ""
	I0814 17:45:40.115719   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.115727   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:45:40.115734   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:45:40.115798   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:45:40.155537   80228 cri.go:89] found id: ""
	I0814 17:45:40.155566   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.155574   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:45:40.155580   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:45:40.155645   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:45:40.189570   80228 cri.go:89] found id: ""
	I0814 17:45:40.189604   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.189616   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:45:40.189625   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:45:40.189708   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:45:40.222496   80228 cri.go:89] found id: ""
	I0814 17:45:40.222521   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.222528   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:45:40.222533   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:45:40.222590   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:45:40.255095   80228 cri.go:89] found id: ""
	I0814 17:45:40.255129   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.255142   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:45:40.255151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:45:40.255233   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:45:40.290297   80228 cri.go:89] found id: ""
	I0814 17:45:40.290326   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.290341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:45:40.290348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:45:40.290396   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:45:40.326660   80228 cri.go:89] found id: ""
	I0814 17:45:40.326685   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.326695   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:45:40.326701   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:45:40.326764   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:45:40.359867   80228 cri.go:89] found id: ""
	I0814 17:45:40.359896   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.359908   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:45:40.359918   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:45:40.359933   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:45:40.397513   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:45:40.397543   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:45:40.451744   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:45:40.451778   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:45:40.475817   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:45:40.475843   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:45:40.575913   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:45:40.575933   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:45:40.575946   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 17:45:40.683455   80228 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:45:40.683509   80228 out.go:239] * 
	W0814 17:45:40.683587   80228 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.683623   80228 out.go:239] * 
	W0814 17:45:40.684431   80228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:45:40.688043   80228 out.go:177] 
	W0814 17:45:40.689238   80228 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.689291   80228 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:45:40.689318   80228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:45:40.690913   80228 out.go:177] 
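A minimal sketch of the retry that the suggestion above points at, using the --extra-config flag quoted in the log; the profile name <profile> is a placeholder (not taken from this log), and the --driver/--container-runtime flags are assumptions based on this KVM/cri-o job rather than flags shown in the output above:

	# retry the start with the kubelet cgroup driver pinned to systemd, as the log suggests
	minikube start -p <profile> --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still refuses connections on localhost:10248, inspect it on the node
	minikube ssh -p <profile> "sudo systemctl status kubelet"
	minikube ssh -p <profile> "sudo journalctl -xeu kubelet"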
	
	
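The kubeadm output above also points at crictl for inspecting control-plane containers over the CRI-O socket; a hedged example built from the exact commands in that output, where CONTAINERID is a placeholder for the ID of the failing container:

	# list Kubernetes containers known to CRI-O, excluding pause sandboxes
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then dump the logs of the failing container
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID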
	==> CRI-O <==
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.835913088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c8a450c-2fe6-4190-8323-ffd287277fb4 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.837044766Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39aba424-b373-4698-a402-01bfb0b9e61e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.837388126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657932837365286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39aba424-b373-4698-a402-01bfb0b9e61e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.838199588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93a79b15-f359-47c6-8a8f-d5c5bc90a688 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.838249187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93a79b15-f359-47c6-8a8f-d5c5bc90a688 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.838629432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c,PodSandboxId:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657382263320602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71,PodSandboxId:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381501668944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5,PodSandboxId:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381268205264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b
0e3bf4-41d9-4151-8255-37881e596c20,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8,PodSandboxId:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723657380705297314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010,PodSandboxId:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657369806330384,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5,PodSandboxId:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657369820972
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915,PodSandboxId:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657369809360353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02,PodSandboxId:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657369736344265,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9,PodSandboxId:be5645e5ce93e1e6589d5d428d66361441b33cdea203ed9f3c8810db9262b676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657089297749478,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93a79b15-f359-47c6-8a8f-d5c5bc90a688 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.872837416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdf01b08-3430-4034-be57-83d18de12c5f name=/runtime.v1.RuntimeService/Version
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.872914658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdf01b08-3430-4034-be57-83d18de12c5f name=/runtime.v1.RuntimeService/Version
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.873781559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09e349a5-3cfd-459c-805e-17837e6f338c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.874107339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657932874086923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09e349a5-3cfd-459c-805e-17837e6f338c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.874849736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2820deb4-7af6-45de-8bc2-b70d71212201 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.874899783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2820deb4-7af6-45de-8bc2-b70d71212201 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.875098011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c,PodSandboxId:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657382263320602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71,PodSandboxId:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381501668944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5,PodSandboxId:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381268205264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b
0e3bf4-41d9-4151-8255-37881e596c20,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8,PodSandboxId:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723657380705297314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010,PodSandboxId:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657369806330384,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5,PodSandboxId:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657369820972
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915,PodSandboxId:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657369809360353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02,PodSandboxId:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657369736344265,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9,PodSandboxId:be5645e5ce93e1e6589d5d428d66361441b33cdea203ed9f3c8810db9262b676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657089297749478,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2820deb4-7af6-45de-8bc2-b70d71212201 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.889610887Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20dd2f99-e3ad-42e1-b0cd-eed83b7dab3b name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.889866964Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8ceb689ac55f36f0038114c250ab0ac6b9eb561a251116fb07d574b85ff44ecc,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-7qljd,Uid:0f0e5d07-eb28-46b3-9270-554006151eda,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657382245652985,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-7qljd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f0e5d07-eb28-46b3-9270-554006151eda,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T17:43:01.938514642Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657382171192061,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-14T17:43:01.860926130Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-h4dmc,Uid:33f2fdca-15ba-430f-989f-3c569f33a76a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657380622395165,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T17:43:00.300887546Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-mpfqf,Uid:7b0e3bf4-41d9-4151-
8255-37881e596c20,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657380580590061,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b0e3bf4-41d9-4151-8255-37881e596c20,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T17:43:00.264971711Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&PodSandboxMetadata{Name:kube-proxy-s6bps,Uid:9165c654-568f-4206-878c-f0c88ccd38cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657380525730639,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T17:43:00.216725356Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-545149,Uid:6412917e9c19e52d0a896519458e8f07,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657369587723993,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.162:2379,kubernetes.io/config.hash: 6412917e9c19e52d0a896519458e8f07,kubernetes.io/config.seen: 2024-08-14T17:42:49.121676420Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Met
adata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-545149,Uid:dcf0ae35132362a5a7f1f7744a41f06a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723657369577199382,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.162:8443,kubernetes.io/config.hash: dcf0ae35132362a5a7f1f7744a41f06a,kubernetes.io/config.seen: 2024-08-14T17:42:49.121677630Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-545149,Uid:d155167fb36f79ed629d90b68f623528,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657369563210123,Labels:map[string]strin
g{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d155167fb36f79ed629d90b68f623528,kubernetes.io/config.seen: 2024-08-14T17:42:49.121675033Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-545149,Uid:00a248fb55c574b206d666259690ea8d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657369555875165,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,tier: control-plane,},Annotations:map[string]strin
g{kubernetes.io/config.hash: 00a248fb55c574b206d666259690ea8d,kubernetes.io/config.seen: 2024-08-14T17:42:49.121670964Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=20dd2f99-e3ad-42e1-b0cd-eed83b7dab3b name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.890598688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5153e83e-4049-4955-9656-6d1ac6b10206 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.890646407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5153e83e-4049-4955-9656-6d1ac6b10206 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.890834449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c,PodSandboxId:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657382263320602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71,PodSandboxId:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381501668944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5,PodSandboxId:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381268205264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b
0e3bf4-41d9-4151-8255-37881e596c20,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8,PodSandboxId:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723657380705297314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010,PodSandboxId:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657369806330384,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5,PodSandboxId:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657369820972
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915,PodSandboxId:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657369809360353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02,PodSandboxId:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657369736344265,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5153e83e-4049-4955-9656-6d1ac6b10206 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.905622668Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d395971a-85ca-4554-b218-630b1c48364b name=/runtime.v1.RuntimeService/Version
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.905683648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d395971a-85ca-4554-b218-630b1c48364b name=/runtime.v1.RuntimeService/Version
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.906627376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a65792a-ff13-454f-8b3f-7efd3006e71d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.907007325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657932906985504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a65792a-ff13-454f-8b3f-7efd3006e71d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.907577317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d9ac02a-00ce-4091-bcfe-4138e2259a08 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.907632354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d9ac02a-00ce-4091-bcfe-4138e2259a08 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:52:12 no-preload-545149 crio[722]: time="2024-08-14 17:52:12.907844310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c,PodSandboxId:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657382263320602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71,PodSandboxId:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381501668944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5,PodSandboxId:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381268205264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b
0e3bf4-41d9-4151-8255-37881e596c20,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8,PodSandboxId:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723657380705297314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010,PodSandboxId:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657369806330384,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5,PodSandboxId:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657369820972
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915,PodSandboxId:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657369809360353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02,PodSandboxId:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657369736344265,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9,PodSandboxId:be5645e5ce93e1e6589d5d428d66361441b33cdea203ed9f3c8810db9262b676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657089297749478,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d9ac02a-00ce-4091-bcfe-4138e2259a08 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6411832275e2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   0d1171be4b2cd       storage-provisioner
	be40074a3ac3b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4df6341d4c94d       coredns-6f6b679f8f-h4dmc
	fd85bbc0876fa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9a33b11104553       coredns-6f6b679f8f-mpfqf
	6f86f7bd2800b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   2392f372cf1b9       kube-proxy-s6bps
	ad2db9a00effe       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   67609ef7253a4       kube-scheduler-no-preload-545149
	3a8d7d31b1c60       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   505c3ee880b56       etcd-no-preload-545149
	a6471a23e249b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   6950ae89f5edc       kube-controller-manager-no-preload-545149
	22898c56f39e5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   b3fbe63d0b395       kube-apiserver-no-preload-545149
	1c1eb47f90029       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   be5645e5ce93e       kube-apiserver-no-preload-545149
	
	
	==> coredns [be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-545149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-545149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=no-preload-545149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T17_42_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:42:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-545149
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:52:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:48:10 +0000   Wed, 14 Aug 2024 17:42:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:48:10 +0000   Wed, 14 Aug 2024 17:42:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:48:10 +0000   Wed, 14 Aug 2024 17:42:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:48:10 +0000   Wed, 14 Aug 2024 17:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    no-preload-545149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7de90293c47344b9b9852d77ef42a8b0
	  System UUID:                7de90293-c473-44b9-b985-2d77ef42a8b0
	  Boot ID:                    2862b156-9a6e-4776-85d9-1339de7d8568
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-h4dmc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-6f6b679f8f-mpfqf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-no-preload-545149                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-545149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-545149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-s6bps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-no-preload-545149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-7qljd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m11s  kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node no-preload-545149 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node no-preload-545149 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node no-preload-545149 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s  node-controller  Node no-preload-545149 event: Registered Node no-preload-545149 in Controller
	
	
	==> dmesg <==
	[  +0.055340] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.008090] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.923686] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542431] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.370494] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.062862] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054108] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.164053] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.145553] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.273958] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Aug14 17:38] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.062384] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.832381] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +5.593544] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.358416] kauditd_printk_skb: 85 callbacks suppressed
	[Aug14 17:42] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.534779] systemd-fstab-generator[3083]: Ignoring "noauto" option for root device
	[  +4.634101] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.442861] systemd-fstab-generator[3407]: Ignoring "noauto" option for root device
	[Aug14 17:43] systemd-fstab-generator[3544]: Ignoring "noauto" option for root device
	[  +0.093487] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.574942] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915] <==
	{"level":"info","ts":"2024-08-14T17:42:50.194781Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-14T17:42:50.194984Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.162:2380"}
	{"level":"info","ts":"2024-08-14T17:42:50.195012Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.162:2380"}
	{"level":"info","ts":"2024-08-14T17:42:50.196605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 switched to configuration voters=(10800451076234521878)"}
	{"level":"info","ts":"2024-08-14T17:42:50.196711Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da8895e0fc3a6493","local-member-id":"95e2e907d4f1ad16","added-peer-id":"95e2e907d4f1ad16","added-peer-peer-urls":["https://192.168.39.162:2380"]}
	{"level":"info","ts":"2024-08-14T17:42:50.416502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-14T17:42:50.416559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-14T17:42:50.416603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 received MsgPreVoteResp from 95e2e907d4f1ad16 at term 1"}
	{"level":"info","ts":"2024-08-14T17:42:50.416622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became candidate at term 2"}
	{"level":"info","ts":"2024-08-14T17:42:50.416630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 received MsgVoteResp from 95e2e907d4f1ad16 at term 2"}
	{"level":"info","ts":"2024-08-14T17:42:50.416663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became leader at term 2"}
	{"level":"info","ts":"2024-08-14T17:42:50.416673Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 95e2e907d4f1ad16 elected leader 95e2e907d4f1ad16 at term 2"}
	{"level":"info","ts":"2024-08-14T17:42:50.420623Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:50.425711Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"95e2e907d4f1ad16","local-member-attributes":"{Name:no-preload-545149 ClientURLs:[https://192.168.39.162:2379]}","request-path":"/0/members/95e2e907d4f1ad16/attributes","cluster-id":"da8895e0fc3a6493","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-14T17:42:50.426167Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:42:50.433757Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:42:50.436794Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-14T17:42:50.436838Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-14T17:42:50.436850Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-14T17:42:50.437685Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:42:50.437772Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.162:2379"}
	{"level":"info","ts":"2024-08-14T17:42:50.438508Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T17:42:50.438900Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da8895e0fc3a6493","local-member-id":"95e2e907d4f1ad16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:50.445924Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:50.445979Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 17:52:13 up 14 min,  0 users,  load average: 0.11, 0.14, 0.10
	Linux no-preload-545149 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9] <==
	W0814 17:42:44.846201       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:44.874223       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:44.984898       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:44.995734       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.003225       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.074308       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.100152       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.104838       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.119529       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.120770       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.129704       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.136209       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.140729       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.159735       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.161152       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.171644       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.200894       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.202293       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.215925       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.290906       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.300692       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.326268       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.340094       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.441140       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.518389       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02] <==
	E0814 17:47:53.413182       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0814 17:47:53.413237       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 17:47:53.414481       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:47:53.414583       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:48:53.415548       1 handler_proxy.go:99] no RequestInfo found in the context
	W0814 17:48:53.415873       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:48:53.416037       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0814 17:48:53.416126       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:48:53.417219       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:48:53.417329       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:50:53.418199       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:50:53.418286       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 17:50:53.418460       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:50:53.418558       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:50:53.419859       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:50:53.419921       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010] <==
	E0814 17:46:59.304665       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:46:59.848579       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:47:29.311682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:47:29.856888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:47:59.318838       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:47:59.865341       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:48:10.360320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-545149"
	E0814 17:48:29.325687       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:48:29.874032       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:48:51.905641       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="270.446µs"
	E0814 17:48:59.332064       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:48:59.881759       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:49:03.902927       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="117.385µs"
	E0814 17:49:29.338873       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:49:29.892186       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:49:59.347188       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:49:59.900544       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:50:29.356725       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:50:29.908535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:50:59.364041       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:50:59.917259       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:51:29.371386       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:51:29.926535       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:51:59.377506       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:51:59.933710       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:43:01.022071       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:43:01.040199       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.162"]
	E0814 17:43:01.040294       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:43:01.200564       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:43:01.200615       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:43:01.200647       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:43:01.203735       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:43:01.204021       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:43:01.204055       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:43:01.208212       1 config.go:197] "Starting service config controller"
	I0814 17:43:01.208294       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:43:01.208330       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:43:01.208353       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:43:01.216185       1 config.go:326] "Starting node config controller"
	I0814 17:43:01.216221       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:43:01.308583       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 17:43:01.308657       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:43:01.339444       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5] <==
	W0814 17:42:52.423914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 17:42:52.423937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:52.424259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 17:42:52.424369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.265591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 17:42:53.265667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.279471       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 17:42:53.279544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.375651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 17:42:53.375734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.451238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 17:42:53.451383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.599186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 17:42:53.599304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.617352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 17:42:53.617462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.631807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 17:42:53.632375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.657940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 17:42:53.657989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.658685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 17:42:53.658724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.856662       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 17:42:53.856708       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0814 17:42:55.515479       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 17:50:58 no-preload-545149 kubelet[3414]: E0814 17:50:58.888820    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:51:05 no-preload-545149 kubelet[3414]: E0814 17:51:05.076241    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657865075937359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:05 no-preload-545149 kubelet[3414]: E0814 17:51:05.076602    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657865075937359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:09 no-preload-545149 kubelet[3414]: E0814 17:51:09.888649    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:51:15 no-preload-545149 kubelet[3414]: E0814 17:51:15.077914    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657875077657243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:15 no-preload-545149 kubelet[3414]: E0814 17:51:15.077953    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657875077657243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:24 no-preload-545149 kubelet[3414]: E0814 17:51:24.888558    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:51:25 no-preload-545149 kubelet[3414]: E0814 17:51:25.080278    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657885079948679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:25 no-preload-545149 kubelet[3414]: E0814 17:51:25.080308    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657885079948679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:35 no-preload-545149 kubelet[3414]: E0814 17:51:35.082665    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657895082229365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:35 no-preload-545149 kubelet[3414]: E0814 17:51:35.083038    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657895082229365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:36 no-preload-545149 kubelet[3414]: E0814 17:51:36.888623    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:51:45 no-preload-545149 kubelet[3414]: E0814 17:51:45.085549    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657905085218229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:45 no-preload-545149 kubelet[3414]: E0814 17:51:45.085931    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657905085218229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:51 no-preload-545149 kubelet[3414]: E0814 17:51:51.888138    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:51:54 no-preload-545149 kubelet[3414]: E0814 17:51:54.910448    3414 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:51:54 no-preload-545149 kubelet[3414]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:51:54 no-preload-545149 kubelet[3414]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:51:54 no-preload-545149 kubelet[3414]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:51:54 no-preload-545149 kubelet[3414]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:51:55 no-preload-545149 kubelet[3414]: E0814 17:51:55.088222    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657915087355811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:51:55 no-preload-545149 kubelet[3414]: E0814 17:51:55.088301    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657915087355811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:52:05 no-preload-545149 kubelet[3414]: E0814 17:52:05.090045    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657925089770534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:52:05 no-preload-545149 kubelet[3414]: E0814 17:52:05.090107    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723657925089770534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:52:06 no-preload-545149 kubelet[3414]: E0814 17:52:06.889365    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
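	Two separate things show up in the kubelet log above. First, the metrics-server pod is stuck in ImagePullBackOff because its image points at fake.domain/registry.k8s.io/echoserver:1.4, a registry that does not resolve, so the pull can never succeed; the repeated eviction-manager "missing image stats" lines appear to be noise from the CRI-O ImageFsInfo response carrying no container-filesystem usage rather than the failure cause. Second, the iptables canary cannot create its chain because the ip6tables nat table is not available in the guest. A few ways to confirm this by hand, as a sketch only (the pod and profile names are taken from the log; loading ip6table_nat assumes the guest kernel ships that module):

	  kubectl --context no-preload-545149 -n kube-system get pods -o wide
	  kubectl --context no-preload-545149 -n kube-system describe pod metrics-server-6867b74b74-7qljd
	  minikube -p no-preload-545149 ssh "sudo modprobe ip6table_nat"
	  minikube -p no-preload-545149 ssh "sudo ip6tables -t nat -L -n"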
	
	
	==> storage-provisioner [6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c] <==
	I0814 17:43:02.376085       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 17:43:02.394234       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 17:43:02.394308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 17:43:02.410086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 17:43:02.411304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7670f961-0e1b-47fe-a4ba-c3344e080f56", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-545149_e685c9c5-9ca9-498b-ba4e-231abf101220 became leader
	I0814 17:43:02.411728       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-545149_e685c9c5-9ca9-498b-ba4e-231abf101220!
	I0814 17:43:02.512836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-545149_e685c9c5-9ca9-498b-ba4e-231abf101220!
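	The provisioner log above shows a clean leader-election handoff on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event. If leadership ever looked contested, the current holder is recorded in that object's leader-election annotation and can be read directly; a minimal check, not something the test does:

	  kubectl --context no-preload-545149 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml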
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-545149 -n no-preload-545149
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-545149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7qljd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-545149 describe pod metrics-server-6867b74b74-7qljd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-545149 describe pod metrics-server-6867b74b74-7qljd: exit status 1 (59.078276ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7qljd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-545149 describe pod metrics-server-6867b74b74-7qljd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.03s)
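The describe in the post-mortem fails with NotFound because the pod name captured earlier (metrics-server-6867b74b74-7qljd) apparently no longer existed by the time the helper re-queried it. Selecting by label instead of a recorded pod name avoids pinning a stale identity; a minimal sketch, assuming the addon labels its pods k8s-app=metrics-server (not verified from this report):

  kubectl --context no-preload-545149 -n kube-system get pods -l k8s-app=metrics-server -o wide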

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
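These repeated connection-refused warnings mean the API server at 192.168.72.49:8443 was simply not listening while the old-k8s-version cluster restarted, so the dashboard pod list could not be fetched at all; the interleaved cert_rotation "no such file or directory" errors refer to client certificates of other minikube profiles and appear unrelated. Two manual checks, offered as a sketch (the endpoint is taken from the warnings; the context name is a placeholder because the profile name is not shown in this excerpt):

  curl -ks https://192.168.72.49:8443/healthz
  kubectl --context OLD_K8S_VERSION_PROFILE -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard   # OLD_K8S_VERSION_PROFILE is a placeholder

Any HTTP response from curl, even 401 or 403, would show the port is open again, while "connection refused" matches the state captured in the warnings.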
E0814 17:45:55.281991   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:46:05.661796   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:46:21.493494   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:46:25.080400   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:47:18.347922   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:47:19.605971   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:47:21.996401   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:47:48.145840   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(previous warning repeated 14 more times)
E0814 17:48:02.588960   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(previous warning repeated 10 more times)
E0814 17:48:13.865489   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(previous warning repeated 28 more times)
E0814 17:48:42.670682   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:48:45.060178   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(previous warning repeated 31 more times)
E0814 17:49:16.591301   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(previous warning repeated 12 more times)
E0814 17:49:29.460461   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:49:36.930735   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
(previous warning repeated 21 more times)
E0814 17:49:58.429005   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:50:55.282262   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:51:25.080345   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:52:19.605316   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:52:21.997049   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:52:32.534448   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:53:02.588915   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:53:13.864895   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:54:16.591082   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:54:29.459534   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 2 (234.43973ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-505584" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
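The warnings above come from the harness repeatedly listing pods in the kubernetes-dashboard namespace with the selector k8s-app=kubernetes-dashboard and getting connection refused because the apiserver is down. A minimal client-go sketch of that same list call follows; the kubeconfig path is taken from the start log further down, and the surrounding code is an illustration under those assumptions, not the harness's actual implementation.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the start log; illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19446-13977/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same request as the warnings above:
	// GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		// With the apiserver stopped this surfaces as "connection refused", as in the log.
		fmt.Println("pod list error:", err)
		return
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}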
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 2 (219.78639ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
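The --format value passed to minikube status above is a Go text/template evaluated against the profile's status fields ({{.Host}}, {{.APIServer}}), which is why each command prints a single word such as Running or Stopped. A self-contained sketch of that rendering, using an assumed stand-in struct rather than minikube's real status type:

package main

import (
	"os"
	"text/template"
)

// Stand-in for the status fields referenced by --format; an assumed shape,
// not minikube's actual type.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	// Same template syntax as: minikube status --format={{.APIServer}}
	t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	_ = t.Execute(os.Stdout, s) // prints: Stopped
}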
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-505584 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-505584 logs -n 25: (1.574798535s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-984053 sudo cat                              | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo find                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo crio                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-984053                                       | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-005029 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | disable-driver-mounts-005029                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:30 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-545149             | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309673            | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:33:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:33:46.321266   80228 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:33:46.321519   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321529   80228 out.go:304] Setting ErrFile to fd 2...
	I0814 17:33:46.321533   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321691   80228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:33:46.322185   80228 out.go:298] Setting JSON to false
	I0814 17:33:46.323102   80228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8170,"bootTime":1723648656,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:33:46.323161   80228 start.go:139] virtualization: kvm guest
	I0814 17:33:46.325361   80228 out.go:177] * [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:33:46.326668   80228 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:33:46.326679   80228 notify.go:220] Checking for updates...
	I0814 17:33:46.329217   80228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:33:46.330813   80228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:33:46.332019   80228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:33:46.333264   80228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:33:46.334480   80228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:33:46.336108   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:33:46.336521   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.336564   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.351154   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0814 17:33:46.351563   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.352042   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.352061   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.352395   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.352567   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.354248   80228 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 17:33:46.355547   80228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:33:46.355834   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.355865   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.370976   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0814 17:33:46.371452   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.371977   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.372008   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.372376   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.372624   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.407797   80228 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:33:46.408905   80228 start.go:297] selected driver: kvm2
	I0814 17:33:46.408918   80228 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.409022   80228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:33:46.409677   80228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.409753   80228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:33:46.424801   80228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:33:46.425288   80228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:33:46.425338   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:33:46.425349   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:33:46.425396   80228 start.go:340] cluster config:
	{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.425518   80228 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.427224   80228 out.go:177] * Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	I0814 17:33:46.428485   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:33:46.428516   80228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:33:46.428523   80228 cache.go:56] Caching tarball of preloaded images
	I0814 17:33:46.428589   80228 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:33:46.428600   80228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:33:46.428727   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:33:46.428899   80228 start.go:360] acquireMachinesLock for old-k8s-version-505584: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:33:47.579625   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:50.651557   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:56.731587   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:59.803787   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:05.883582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:08.959564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:15.035593   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:18.107634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:24.187624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:27.259634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:33.339631   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:36.411675   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:42.491633   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:45.563609   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:51.643582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:54.715620   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:00.795564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:03.867637   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:09.947634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:13.019646   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:19.099578   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:22.171640   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:28.251634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:31.323645   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:37.403627   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:40.475635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:46.555591   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:49.627635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:55.707632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:58.779532   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:04.859619   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:07.931632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:14.011612   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:17.083624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:23.163638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:26.235638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
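The long run of "Error dialing TCP ... connect: no route to host" lines above is process 79367 failing to reach the no-preload-545149 VM on 192.168.39.162:22 for several minutes. As a sketch only (these commands are not part of this run's output), the usual way to check from the libvirt host whether that domain ever came up and obtained an address is:

	# Manual diagnosis on the libvirt host. The domain name matches the minikube
	# profile; the network name is assumed to follow minikube's mk-<profile> pattern.
	sudo virsh list --all
	sudo virsh domifaddr no-preload-545149
	sudo virsh net-dhcp-leases mk-no-preload-545149

A domain that is running but never receives a lease would be consistent with the SSH dial failures recorded here.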
	I0814 17:36:29.240279   79521 start.go:364] duration metric: took 4m23.88398072s to acquireMachinesLock for "embed-certs-309673"
	I0814 17:36:29.240341   79521 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:29.240351   79521 fix.go:54] fixHost starting: 
	I0814 17:36:29.240703   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:29.240730   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:29.255901   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0814 17:36:29.256372   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:29.256816   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:36:29.256839   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:29.257153   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:29.257337   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:29.257518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:36:29.259382   79521 fix.go:112] recreateIfNeeded on embed-certs-309673: state=Stopped err=<nil>
	I0814 17:36:29.259419   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	W0814 17:36:29.259583   79521 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:29.261931   79521 out.go:177] * Restarting existing kvm2 VM for "embed-certs-309673" ...
	I0814 17:36:29.263301   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Start
	I0814 17:36:29.263487   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring networks are active...
	I0814 17:36:29.264251   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network default is active
	I0814 17:36:29.264797   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network mk-embed-certs-309673 is active
	I0814 17:36:29.265331   79521 main.go:141] libmachine: (embed-certs-309673) Getting domain xml...
	I0814 17:36:29.266055   79521 main.go:141] libmachine: (embed-certs-309673) Creating domain...
	I0814 17:36:29.237663   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:29.237704   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238088   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:36:29.238131   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238337   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:36:29.240159   79367 machine.go:97] duration metric: took 4m37.421920583s to provisionDockerMachine
	I0814 17:36:29.240195   79367 fix.go:56] duration metric: took 4m37.443181113s for fixHost
	I0814 17:36:29.240202   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 4m37.443414836s
	W0814 17:36:29.240223   79367 start.go:714] error starting host: provision: host is not running
	W0814 17:36:29.240348   79367 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 17:36:29.240358   79367 start.go:729] Will try again in 5 seconds ...
	I0814 17:36:30.482377   79521 main.go:141] libmachine: (embed-certs-309673) Waiting to get IP...
	I0814 17:36:30.483405   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.483750   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.483837   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.483729   80776 retry.go:31] will retry after 224.900105ms: waiting for machine to come up
	I0814 17:36:30.710259   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.710718   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.710748   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.710679   80776 retry.go:31] will retry after 322.892012ms: waiting for machine to come up
	I0814 17:36:31.035358   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.035807   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.035835   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.035757   80776 retry.go:31] will retry after 374.226901ms: waiting for machine to come up
	I0814 17:36:31.411228   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.411783   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.411813   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.411717   80776 retry.go:31] will retry after 472.149905ms: waiting for machine to come up
	I0814 17:36:31.885265   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.885787   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.885810   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.885757   80776 retry.go:31] will retry after 676.063343ms: waiting for machine to come up
	I0814 17:36:32.563206   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:32.563711   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:32.563745   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:32.563658   80776 retry.go:31] will retry after 904.634039ms: waiting for machine to come up
	I0814 17:36:33.469832   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:33.470255   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:33.470278   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:33.470206   80776 retry.go:31] will retry after 1.132974911s: waiting for machine to come up
	I0814 17:36:34.605040   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:34.605542   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:34.605576   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:34.605498   80776 retry.go:31] will retry after 1.210457498s: waiting for machine to come up
	I0814 17:36:34.242590   79367 start.go:360] acquireMachinesLock for no-preload-545149: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:36:35.817809   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:35.818152   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:35.818177   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:35.818111   80776 retry.go:31] will retry after 1.275236618s: waiting for machine to come up
	I0814 17:36:37.095551   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:37.095975   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:37.096001   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:37.095937   80776 retry.go:31] will retry after 1.716925001s: waiting for machine to come up
	I0814 17:36:38.814927   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:38.815916   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:38.815943   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:38.815864   80776 retry.go:31] will retry after 2.040428036s: waiting for machine to come up
	I0814 17:36:40.858640   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:40.859157   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:40.859188   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:40.859108   80776 retry.go:31] will retry after 2.259949864s: waiting for machine to come up
	I0814 17:36:43.120436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:43.120913   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:43.120939   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:43.120879   80776 retry.go:31] will retry after 3.64334808s: waiting for machine to come up
	I0814 17:36:47.975977   79871 start.go:364] duration metric: took 3m52.18367446s to acquireMachinesLock for "default-k8s-diff-port-885666"
	I0814 17:36:47.976049   79871 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:47.976064   79871 fix.go:54] fixHost starting: 
	I0814 17:36:47.976457   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:47.976492   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:47.993513   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0814 17:36:47.993940   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:47.994480   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:36:47.994504   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:47.994815   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:47.995005   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:36:47.995181   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:36:47.996716   79871 fix.go:112] recreateIfNeeded on default-k8s-diff-port-885666: state=Stopped err=<nil>
	I0814 17:36:47.996755   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	W0814 17:36:47.996923   79871 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:47.998967   79871 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-885666" ...
	I0814 17:36:46.766908   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767458   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has current primary IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767500   79521 main.go:141] libmachine: (embed-certs-309673) Found IP for machine: 192.168.61.2
	I0814 17:36:46.767516   79521 main.go:141] libmachine: (embed-certs-309673) Reserving static IP address...
	I0814 17:36:46.767974   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.767993   79521 main.go:141] libmachine: (embed-certs-309673) Reserved static IP address: 192.168.61.2
	I0814 17:36:46.768006   79521 main.go:141] libmachine: (embed-certs-309673) DBG | skip adding static IP to network mk-embed-certs-309673 - found existing host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"}
	I0814 17:36:46.768017   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Getting to WaitForSSH function...
	I0814 17:36:46.768023   79521 main.go:141] libmachine: (embed-certs-309673) Waiting for SSH to be available...
	I0814 17:36:46.770187   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770517   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.770548   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770612   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH client type: external
	I0814 17:36:46.770643   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa (-rw-------)
	I0814 17:36:46.770672   79521 main.go:141] libmachine: (embed-certs-309673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:36:46.770697   79521 main.go:141] libmachine: (embed-certs-309673) DBG | About to run SSH command:
	I0814 17:36:46.770703   79521 main.go:141] libmachine: (embed-certs-309673) DBG | exit 0
	I0814 17:36:46.895078   79521 main.go:141] libmachine: (embed-certs-309673) DBG | SSH cmd err, output: <nil>: 
	I0814 17:36:46.895444   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetConfigRaw
	I0814 17:36:46.896033   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:46.898715   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899085   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.899117   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899434   79521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/config.json ...
	I0814 17:36:46.899701   79521 machine.go:94] provisionDockerMachine start ...
	I0814 17:36:46.899723   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:46.899906   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:46.901985   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902244   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.902268   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902398   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:46.902564   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902707   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902829   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:46.902966   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:46.903201   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:46.903213   79521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:36:47.007289   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:36:47.007313   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007589   79521 buildroot.go:166] provisioning hostname "embed-certs-309673"
	I0814 17:36:47.007608   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007802   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.010311   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010631   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.010670   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010805   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.010956   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011067   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.011269   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.011455   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.011467   79521 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-309673 && echo "embed-certs-309673" | sudo tee /etc/hostname
	I0814 17:36:47.128575   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-309673
	
	I0814 17:36:47.128601   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.131125   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131464   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.131493   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131655   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.131970   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132146   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132286   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.132457   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.132614   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.132630   79521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-309673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-309673/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-309673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:36:47.247426   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:47.247469   79521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:36:47.247486   79521 buildroot.go:174] setting up certificates
	I0814 17:36:47.247496   79521 provision.go:84] configureAuth start
	I0814 17:36:47.247506   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.247768   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.250616   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.250993   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.251018   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.251148   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.253149   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.253465   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253551   79521 provision.go:143] copyHostCerts
	I0814 17:36:47.253616   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:36:47.253628   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:36:47.253703   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:36:47.253817   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:36:47.253835   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:36:47.253875   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:36:47.253952   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:36:47.253962   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:36:47.253994   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:36:47.254060   79521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.embed-certs-309673 san=[127.0.0.1 192.168.61.2 embed-certs-309673 localhost minikube]
	I0814 17:36:47.338831   79521 provision.go:177] copyRemoteCerts
	I0814 17:36:47.338892   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:36:47.338921   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.341582   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.341897   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.341915   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.342053   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.342237   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.342374   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.342497   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.424777   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:36:47.446682   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:36:47.467672   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:36:47.488423   79521 provision.go:87] duration metric: took 240.914172ms to configureAuth
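configureAuth above regenerates the machine's server certificate with the SAN list shown (127.0.0.1, 192.168.61.2, embed-certs-309673, localhost, minikube) and copies it into the guest as /etc/docker/server.pem. As an illustrative check only (not part of this run's output), openssl on the Jenkins host can print those SANs from the file the log references:

	# Inspect the SANs of the server certificate generated by provision.go.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'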
	I0814 17:36:47.488453   79521 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:36:47.488645   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:36:47.488733   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.491453   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.491793   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.491816   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.492028   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.492216   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492351   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492479   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.492716   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.492909   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.492931   79521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:36:47.746210   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:36:47.746248   79521 machine.go:97] duration metric: took 846.530779ms to provisionDockerMachine
	I0814 17:36:47.746260   79521 start.go:293] postStartSetup for "embed-certs-309673" (driver="kvm2")
	I0814 17:36:47.746274   79521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:36:47.746297   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.746659   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:36:47.746694   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.749342   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749674   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.749702   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749831   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.750004   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.750126   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.750272   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.833279   79521 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:36:47.837076   79521 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:36:47.837099   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:36:47.837183   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:36:47.837269   79521 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:36:47.837387   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:36:47.845640   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:47.866978   79521 start.go:296] duration metric: took 120.70557ms for postStartSetup
	I0814 17:36:47.867012   79521 fix.go:56] duration metric: took 18.626661733s for fixHost
	I0814 17:36:47.867030   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.869687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870016   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.870046   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870220   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.870399   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870660   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870827   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.870999   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.871209   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.871221   79521 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:36:47.975817   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657007.950271601
	
	I0814 17:36:47.975848   79521 fix.go:216] guest clock: 1723657007.950271601
	I0814 17:36:47.975860   79521 fix.go:229] Guest: 2024-08-14 17:36:47.950271601 +0000 UTC Remote: 2024-08-14 17:36:47.867016056 +0000 UTC m=+282.648397849 (delta=83.255545ms)
	I0814 17:36:47.975889   79521 fix.go:200] guest clock delta is within tolerance: 83.255545ms
	I0814 17:36:47.975896   79521 start.go:83] releasing machines lock for "embed-certs-309673", held for 18.735575335s
	I0814 17:36:47.975931   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.976213   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.978934   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979457   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.979483   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979625   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980134   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980303   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980382   79521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:36:47.980428   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.980574   79521 ssh_runner.go:195] Run: cat /version.json
	I0814 17:36:47.980603   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.983247   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983557   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983649   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.983687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983828   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984032   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984042   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.984063   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.984183   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984232   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984320   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984412   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.984467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984608   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:48.064891   79521 ssh_runner.go:195] Run: systemctl --version
	I0814 17:36:48.101403   79521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:36:48.239841   79521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:36:48.245634   79521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:36:48.245718   79521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:36:48.260517   79521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:36:48.260543   79521 start.go:495] detecting cgroup driver to use...
	I0814 17:36:48.260597   79521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:36:48.275003   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:36:48.290316   79521 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:36:48.290376   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:36:48.304351   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:36:48.320954   79521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:36:48.434176   79521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:36:48.582137   79521 docker.go:233] disabling docker service ...
	I0814 17:36:48.582217   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:36:48.595784   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:36:48.608379   79521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:36:48.735500   79521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:36:48.876194   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:36:48.891826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:36:48.910820   79521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:36:48.910887   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.921125   79521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:36:48.921198   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.931615   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.942779   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.953124   79521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:36:48.963454   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.974457   79521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.991583   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:49.006059   79521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:36:49.015586   79521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:36:49.015649   79521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:36:49.028742   79521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:36:49.038126   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:49.155387   79521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:36:49.318598   79521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:36:49.318679   79521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
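After restarting CRI-O, minikube waits up to 60s for /var/run/crio/crio.sock to exist before probing crictl. A minimal Go sketch of that kind of socket-path wait (hypothetical helper, not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is available")
}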
	I0814 17:36:49.323575   79521 start.go:563] Will wait 60s for crictl version
	I0814 17:36:49.323636   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:36:49.327233   79521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:36:49.369724   79521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:36:49.369814   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.399516   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.431594   79521 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:36:49.432940   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:49.435776   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436168   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:49.436199   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436447   79521 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 17:36:49.440606   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
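The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the host gateway IP. A rough, illustrative Go equivalent of that rewrite (it needs root, and minikube itself does it remotely over SSH as shown):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.61.1\thost.minikube.internal"
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Drop any stale host.minikube.internal line, then append the fresh mapping.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	// Writing /etc/hosts requires root; the log above does it via sudo cp.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}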
	I0814 17:36:49.453159   79521 kubeadm.go:883] updating cluster {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:36:49.453272   79521 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:36:49.453311   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:49.486635   79521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:36:49.486708   79521 ssh_runner.go:195] Run: which lz4
	I0814 17:36:49.490626   79521 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:36:49.494822   79521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:36:49.494852   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:36:48.000271   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Start
	I0814 17:36:48.000453   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring networks are active...
	I0814 17:36:48.001246   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network default is active
	I0814 17:36:48.001621   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network mk-default-k8s-diff-port-885666 is active
	I0814 17:36:48.002158   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Getting domain xml...
	I0814 17:36:48.002982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Creating domain...
	I0814 17:36:49.272729   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting to get IP...
	I0814 17:36:49.273726   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274182   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274273   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.274157   80921 retry.go:31] will retry after 208.258845ms: waiting for machine to come up
	I0814 17:36:49.483781   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484278   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.484211   80921 retry.go:31] will retry after 318.193974ms: waiting for machine to come up
	I0814 17:36:49.803815   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804311   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.804277   80921 retry.go:31] will retry after 426.023242ms: waiting for machine to come up
	I0814 17:36:50.232060   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232610   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232646   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.232519   80921 retry.go:31] will retry after 534.392065ms: waiting for machine to come up
	I0814 17:36:50.745416   79521 crio.go:462] duration metric: took 1.254815826s to copy over tarball
	I0814 17:36:50.745515   79521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:36:52.865848   79521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.120299454s)
	I0814 17:36:52.865879   79521 crio.go:469] duration metric: took 2.120437156s to extract the tarball
	I0814 17:36:52.865887   79521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:36:52.901808   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:52.946366   79521 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:36:52.946386   79521 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:36:52.946394   79521 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.31.0 crio true true} ...
	I0814 17:36:52.946492   79521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-309673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:36:52.946556   79521 ssh_runner.go:195] Run: crio config
	I0814 17:36:52.992520   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:36:52.992541   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:36:52.992553   79521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:36:52.992577   79521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-309673 NodeName:embed-certs-309673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:36:52.992740   79521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-309673"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:36:52.992811   79521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:36:53.002460   79521 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:36:53.002539   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:36:53.011167   79521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 17:36:53.026436   79521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:36:53.041728   79521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
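A simple sanity check on the kubeadm.yaml written above is that the KubeletConfiguration declares the same cgroup driver ("cgroupfs") that was set as cgroup_manager in CRI-O earlier in this log. A stdlib-only Go sketch of such a check (hypothetical, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The kubelet must use the same cgroup driver the container runtime was configured with.
	if strings.Contains(string(raw), "cgroupDriver: cgroupfs") {
		fmt.Println("kubelet cgroupDriver matches CRI-O's cgroup_manager (cgroupfs)")
	} else {
		fmt.Println("cgroupDriver mismatch: kubelet and CRI-O may disagree")
	}
}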
	I0814 17:36:53.059102   79521 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0814 17:36:53.062728   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:53.073803   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:53.200870   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:36:53.217448   79521 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673 for IP: 192.168.61.2
	I0814 17:36:53.217472   79521 certs.go:194] generating shared ca certs ...
	I0814 17:36:53.217495   79521 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:36:53.217694   79521 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:36:53.217755   79521 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:36:53.217766   79521 certs.go:256] generating profile certs ...
	I0814 17:36:53.217876   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/client.key
	I0814 17:36:53.217961   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key.83510bb8
	I0814 17:36:53.218034   79521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key
	I0814 17:36:53.218202   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:36:53.218248   79521 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:36:53.218272   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:36:53.218309   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:36:53.218343   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:36:53.218380   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:36:53.218447   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:53.219187   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:36:53.273437   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:36:53.307566   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:36:53.330107   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:36:53.360324   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 17:36:53.386974   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:36:53.409537   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:36:53.433873   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:36:53.456408   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:36:53.478233   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:36:53.500264   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:36:53.522440   79521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:36:53.538977   79521 ssh_runner.go:195] Run: openssl version
	I0814 17:36:53.544866   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:36:53.555085   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559340   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559399   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.565106   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:36:53.575561   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:36:53.585605   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589838   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589911   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.595165   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:36:53.604934   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:36:53.615153   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619362   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619435   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.624949   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:36:53.635459   79521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:36:53.639814   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:36:53.645419   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:36:53.651013   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:36:53.657004   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:36:53.662540   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:36:53.668187   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
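The openssl x509 -checkend 86400 calls above ask whether each certificate expires within the next 24 hours. An illustrative Go equivalent using crypto/x509 (not how minikube implements it):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given duration (roughly what `-checkend` does).
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it should be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}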
	I0814 17:36:53.673762   79521 kubeadm.go:392] StartCluster: {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:36:53.673867   79521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:36:53.673930   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.709404   79521 cri.go:89] found id: ""
	I0814 17:36:53.709490   79521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:36:53.719041   79521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:36:53.719068   79521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:36:53.719123   79521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:36:53.728077   79521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:36:53.729030   79521 kubeconfig.go:125] found "embed-certs-309673" server: "https://192.168.61.2:8443"
	I0814 17:36:53.730943   79521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:36:53.739841   79521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0814 17:36:53.739872   79521 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:36:53.739886   79521 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:36:53.739947   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.777400   79521 cri.go:89] found id: ""
	I0814 17:36:53.777476   79521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:36:53.792838   79521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:36:53.802189   79521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:36:53.802223   79521 kubeadm.go:157] found existing configuration files:
	
	I0814 17:36:53.802278   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:36:53.813778   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:36:53.813854   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:36:53.825962   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:36:53.834929   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:36:53.834987   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:36:53.846315   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.855138   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:36:53.855206   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.864109   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:36:53.872613   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:36:53.872672   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:36:53.881307   79521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:36:53.890148   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.002103   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.664940   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.868608   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.932317   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:55.006430   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:36:55.006523   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:50.768099   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768599   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768629   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.768554   80921 retry.go:31] will retry after 487.741283ms: waiting for machine to come up
	I0814 17:36:51.258499   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259047   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:51.258975   80921 retry.go:31] will retry after 831.435484ms: waiting for machine to come up
	I0814 17:36:52.091900   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092297   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092351   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:52.092249   80921 retry.go:31] will retry after 1.067858402s: waiting for machine to come up
	I0814 17:36:53.161928   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162393   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:53.162366   80921 retry.go:31] will retry after 1.33971606s: waiting for machine to come up
	I0814 17:36:54.503810   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504184   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504214   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:54.504121   80921 retry.go:31] will retry after 1.4882184s: waiting for machine to come up
	I0814 17:36:55.506634   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.007367   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.507265   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.007343   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.026436   79521 api_server.go:72] duration metric: took 2.020005984s to wait for apiserver process to appear ...
	I0814 17:36:57.026471   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:36:57.026496   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:36:55.994824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995255   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995283   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:55.995206   80921 retry.go:31] will retry after 1.65461779s: waiting for machine to come up
	I0814 17:36:57.651449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651837   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651867   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:57.651794   80921 retry.go:31] will retry after 2.38071296s: waiting for machine to come up
	I0814 17:37:00.033719   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034261   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034290   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:00.034204   80921 retry.go:31] will retry after 3.476533232s: waiting for machine to come up
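The "will retry after ..." lines above show the usual wait-for-IP loop: each attempt sleeps a little longer before asking libvirt for the DHCP lease again. A generic Go sketch of that retry-with-growing-delay pattern (hypothetical helper, not the actual retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping a little longer between each try.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2
	}
	return err
}

func main() {
	attempt := 0
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		attempt++
		if attempt < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
		return
	}
	fmt.Println("machine came up after", attempt, "attempts")
}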
	I0814 17:37:00.329636   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.329674   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.329689   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.357287   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.357334   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.527150   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.536020   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:00.536058   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.026558   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.034241   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.034271   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.526814   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.536226   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.536267   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:02.026791   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:02.031068   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:37:02.037240   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:02.037266   79521 api_server.go:131] duration metric: took 5.010786446s to wait for apiserver health ...
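The healthz probes above simply poll https://192.168.61.2:8443/healthz until it returns 200, tolerating the interim 403 and 500 responses while RBAC bootstrap roles are still being created. A minimal Go sketch of that polling loop (illustrative only; certificate verification is skipped here because the sketch has no CA bundle):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			// 403/500 are expected while the apiserver is still bootstrapping.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for apiserver healthz")
	os.Exit(1)
}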
	I0814 17:37:02.037278   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:37:02.037286   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:02.039248   79521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:02.040543   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:02.050754   79521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:02.067333   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:02.076082   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:02.076115   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:02.076130   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:02.076139   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:02.076178   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:02.076198   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:02.076218   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:02.076233   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:02.076242   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:02.076253   79521 system_pods.go:74] duration metric: took 8.901356ms to wait for pod list to return data ...
	I0814 17:37:02.076266   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:02.080064   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:02.080087   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:02.080101   79521 node_conditions.go:105] duration metric: took 3.829329ms to run NodePressure ...
	I0814 17:37:02.080121   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:02.359163   79521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368689   79521 kubeadm.go:739] kubelet initialised
	I0814 17:37:02.368717   79521 kubeadm.go:740] duration metric: took 9.524301ms waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368728   79521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:02.376056   79521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.381317   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381347   79521 pod_ready.go:81] duration metric: took 5.262062ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.381359   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381370   79521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.386799   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386822   79521 pod_ready.go:81] duration metric: took 5.440585ms for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.386832   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386838   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.392829   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392853   79521 pod_ready.go:81] duration metric: took 6.003762ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.392864   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392874   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.470943   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470975   79521 pod_ready.go:81] duration metric: took 78.089715ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.470984   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470996   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.870134   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870163   79521 pod_ready.go:81] duration metric: took 399.157385ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.870175   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870183   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.270805   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270837   79521 pod_ready.go:81] duration metric: took 400.647029ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.270848   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270856   79521 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.671023   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671058   79521 pod_ready.go:81] duration metric: took 400.191147ms for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.671070   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671079   79521 pod_ready.go:38] duration metric: took 1.302340033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:03.671098   79521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:37:03.683676   79521 ops.go:34] apiserver oom_adj: -16
	I0814 17:37:03.683701   79521 kubeadm.go:597] duration metric: took 9.964625256s to restartPrimaryControlPlane
	I0814 17:37:03.683712   79521 kubeadm.go:394] duration metric: took 10.009956133s to StartCluster
	I0814 17:37:03.683729   79521 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.683809   79521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:03.685474   79521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.685708   79521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:37:03.685766   79521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:37:03.685850   79521 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-309673"
	I0814 17:37:03.685862   79521 addons.go:69] Setting default-storageclass=true in profile "embed-certs-309673"
	I0814 17:37:03.685900   79521 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-309673"
	I0814 17:37:03.685907   79521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-309673"
	W0814 17:37:03.685911   79521 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:37:03.685933   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:03.685933   79521 addons.go:69] Setting metrics-server=true in profile "embed-certs-309673"
	I0814 17:37:03.685988   79521 addons.go:234] Setting addon metrics-server=true in "embed-certs-309673"
	W0814 17:37:03.686006   79521 addons.go:243] addon metrics-server should already be in state true
	I0814 17:37:03.685945   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686076   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686284   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686362   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686391   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686422   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686482   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686538   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.687598   79521 out.go:177] * Verifying Kubernetes components...
	I0814 17:37:03.688995   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:03.701610   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0814 17:37:03.702174   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.702789   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.702817   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.703223   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.703682   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.704077   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0814 17:37:03.704508   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.704864   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0814 17:37:03.705141   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705154   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.705224   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.705473   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.705656   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705670   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.706806   79521 addons.go:234] Setting addon default-storageclass=true in "embed-certs-309673"
	W0814 17:37:03.706824   79521 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:37:03.706851   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.707093   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707112   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.707420   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.707536   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707584   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.708017   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.708079   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.722383   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0814 17:37:03.722779   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.723288   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.723307   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.728799   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0814 17:37:03.728839   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38781
	I0814 17:37:03.728928   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.729426   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729495   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729776   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.729809   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729967   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.729973   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.730360   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730371   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730698   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.730749   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.732979   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.733596   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.735250   79521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:03.735262   79521 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:37:03.736576   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:37:03.736593   79521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:37:03.736607   79521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.736612   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.736620   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:37:03.736637   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.740008   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740123   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740491   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740558   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740676   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.740819   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740842   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740872   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.740994   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741120   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.741160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.741527   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.741692   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741817   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.749144   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0814 17:37:03.749482   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.749914   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.749929   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.750267   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.750467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.752107   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.752325   79521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:03.752339   79521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:37:03.752360   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.754559   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.754845   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.754859   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.755073   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.755247   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.755402   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.755529   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.877535   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:03.897022   79521 node_ready.go:35] waiting up to 6m0s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:03.951512   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.988066   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:37:03.988085   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:37:04.014925   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:04.025506   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:37:04.025531   79521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:37:04.072457   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:04.072480   79521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:37:04.104804   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:05.067867   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.116315804s)
	I0814 17:37:05.067888   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052939793s)
	I0814 17:37:05.067925   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.067935   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068000   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068023   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068241   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068322   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068336   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068345   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068364   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068454   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068485   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068497   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068505   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068795   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068815   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068823   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068830   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068872   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068905   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.087716   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.087746   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.088086   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.088106   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113388   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.008529856s)
	I0814 17:37:05.113441   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113458   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.113736   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.113787   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.113800   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113812   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113823   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.114057   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.114071   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.114081   79521 addons.go:475] Verifying addon metrics-server=true in "embed-certs-309673"
	I0814 17:37:05.114163   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.116443   79521 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:37:05.118087   79521 addons.go:510] duration metric: took 1.432323959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:37:03.512364   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512842   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:03.512785   80921 retry.go:31] will retry after 4.358649621s: waiting for machine to come up
	I0814 17:37:09.324026   80228 start.go:364] duration metric: took 3m22.895078586s to acquireMachinesLock for "old-k8s-version-505584"
	I0814 17:37:09.324085   80228 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:09.324101   80228 fix.go:54] fixHost starting: 
	I0814 17:37:09.324533   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:09.324575   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:09.344085   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0814 17:37:09.344490   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:09.344980   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:37:09.345006   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:09.345416   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:09.345674   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:09.345842   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetState
	I0814 17:37:09.347489   80228 fix.go:112] recreateIfNeeded on old-k8s-version-505584: state=Stopped err=<nil>
	I0814 17:37:09.347511   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	W0814 17:37:09.347696   80228 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:09.349747   80228 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-505584" ...
	I0814 17:37:05.901013   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.901054   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.876377   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876820   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has current primary IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876845   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Found IP for machine: 192.168.50.184
	I0814 17:37:07.876857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserving static IP address...
	I0814 17:37:07.877281   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.877300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserved static IP address: 192.168.50.184
	I0814 17:37:07.877320   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | skip adding static IP to network mk-default-k8s-diff-port-885666 - found existing host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"}
	I0814 17:37:07.877339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Getting to WaitForSSH function...
	I0814 17:37:07.877355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for SSH to be available...
	I0814 17:37:07.879843   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880200   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.880242   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880419   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH client type: external
	I0814 17:37:07.880445   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa (-rw-------)
	I0814 17:37:07.880496   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:07.880517   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | About to run SSH command:
	I0814 17:37:07.880549   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | exit 0
	I0814 17:37:08.007553   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:08.007929   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetConfigRaw
	I0814 17:37:08.009171   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.012358   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.012772   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.012804   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.013076   79871 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/config.json ...
	I0814 17:37:08.013284   79871 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:08.013310   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:08.013579   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.015965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016325   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.016363   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016491   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.016680   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.016873   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.017004   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.017140   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.017354   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.017376   79871 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:08.132369   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:08.132404   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132657   79871 buildroot.go:166] provisioning hostname "default-k8s-diff-port-885666"
	I0814 17:37:08.132695   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.136230   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.136696   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136937   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.137163   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137350   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.137672   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.137878   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.137900   79871 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-885666 && echo "default-k8s-diff-port-885666" | sudo tee /etc/hostname
	I0814 17:37:08.273593   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-885666
	
	I0814 17:37:08.273626   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.276470   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.276830   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.276862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.277137   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.277382   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277547   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277713   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.277855   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.278052   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.278072   79871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-885666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-885666/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-885666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:08.401522   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:08.401556   79871 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:08.401602   79871 buildroot.go:174] setting up certificates
	I0814 17:37:08.401626   79871 provision.go:84] configureAuth start
	I0814 17:37:08.401650   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.401963   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.404855   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.405285   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405521   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.407826   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408338   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.408371   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408515   79871 provision.go:143] copyHostCerts
	I0814 17:37:08.408583   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:08.408597   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:08.408681   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:08.408812   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:08.408823   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:08.408861   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:08.408947   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:08.408956   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:08.408984   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:08.409064   79871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-885666 san=[127.0.0.1 192.168.50.184 default-k8s-diff-port-885666 localhost minikube]
	I0814 17:37:08.613459   79871 provision.go:177] copyRemoteCerts
	I0814 17:37:08.613530   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:08.613575   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.616704   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617044   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.617072   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.617515   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.617698   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.617844   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:08.705505   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:08.728835   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 17:37:08.751995   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:08.774577   79871 provision.go:87] duration metric: took 372.933752ms to configureAuth
	I0814 17:37:08.774609   79871 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:08.774812   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:08.774880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.777840   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778235   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.778260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778527   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.778752   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.778899   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.779020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.779162   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.779437   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.779458   79871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:09.055900   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:09.055927   79871 machine.go:97] duration metric: took 1.04262996s to provisionDockerMachine
	I0814 17:37:09.055943   79871 start.go:293] postStartSetup for "default-k8s-diff-port-885666" (driver="kvm2")
	I0814 17:37:09.055957   79871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:09.055982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.056325   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:09.056355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.059396   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.059853   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.059888   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.060064   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.060280   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.060558   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.060745   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.150649   79871 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:09.155263   79871 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:09.155295   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:09.155400   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:09.155500   79871 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:09.155623   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:09.167051   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:09.197223   79871 start.go:296] duration metric: took 141.264897ms for postStartSetup
	I0814 17:37:09.197324   79871 fix.go:56] duration metric: took 21.221265818s for fixHost
	I0814 17:37:09.197356   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.201388   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.201965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.202011   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.202109   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.202354   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202569   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202800   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.203010   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:09.203196   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:09.203209   79871 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:09.323868   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657029.302975780
	
	I0814 17:37:09.323892   79871 fix.go:216] guest clock: 1723657029.302975780
	I0814 17:37:09.323900   79871 fix.go:229] Guest: 2024-08-14 17:37:09.30297578 +0000 UTC Remote: 2024-08-14 17:37:09.197335302 +0000 UTC m=+253.546385360 (delta=105.640478ms)
	I0814 17:37:09.323918   79871 fix.go:200] guest clock delta is within tolerance: 105.640478ms
	I0814 17:37:09.323923   79871 start.go:83] releasing machines lock for "default-k8s-diff-port-885666", held for 21.347903434s
	I0814 17:37:09.323948   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.324209   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:09.327260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327802   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.327833   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327993   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328727   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328814   79871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:09.328862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.328955   79871 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:09.328972   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.331813   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332081   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332233   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332274   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332365   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332490   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332512   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332555   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332761   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.332824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332882   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.332926   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.333021   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.416041   79871 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:09.456024   79871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:09.604623   79871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:09.610562   79871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:09.610624   79871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:09.627298   79871 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:09.627344   79871 start.go:495] detecting cgroup driver to use...
	I0814 17:37:09.627418   79871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:09.648212   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:09.666047   79871 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:09.666107   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:09.681875   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:09.695920   79871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:09.824502   79871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:09.979561   79871 docker.go:233] disabling docker service ...
	I0814 17:37:09.979658   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:09.996877   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:10.014264   79871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:10.166653   79871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:10.288261   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
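
The two blocks above stop, disable and mask cri-docker and docker so that CRI-O is left as the only container runtime on the guest. A rough sketch of the same systemctl sequence in Go; the log runs these over SSH via ssh_runner, here os/exec stands in for that, and errors from "stop" are tolerated because the units may not be running:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Mirror of the systemctl calls in the log above.
        steps := [][]string{
            {"systemctl", "stop", "-f", "cri-docker.socket"},
            {"systemctl", "stop", "-f", "cri-docker.service"},
            {"systemctl", "disable", "cri-docker.socket"},
            {"systemctl", "mask", "cri-docker.service"},
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, s := range steps {
            if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
                log.Printf("%v: %v (%s)", s, err, out)
            }
        }
        _ = exec.Command("sudo", "systemctl", "daemon-reload").Run()
    }
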
	I0814 17:37:10.301868   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:10.320716   79871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:10.320788   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.331099   79871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:10.331158   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.342841   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.353762   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.364604   79871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:10.376521   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.386787   79871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.406713   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.418047   79871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:10.428368   79871 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:10.428433   79871 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:10.442759   79871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
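
The sysctl probe above fails because br_netfilter is not loaded yet, so the code falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. A small sketch of that check-then-load pattern, assuming it runs as root on a Linux guest:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Probe the sysctl first; if the key is missing, load br_netfilter.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                fmt.Fprintln(os.Stderr, "modprobe failed:", err)
                os.Exit(1)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, "enabling ip_forward:", err)
        }
    }
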
	I0814 17:37:10.452993   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:10.563097   79871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:10.716953   79871 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:10.717031   79871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:10.722685   79871 start.go:563] Will wait 60s for crictl version
	I0814 17:37:10.722759   79871 ssh_runner.go:195] Run: which crictl
	I0814 17:37:10.726621   79871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:10.764534   79871 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:10.764628   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.791513   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.822380   79871 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:09.351136   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .Start
	I0814 17:37:09.351338   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring networks are active...
	I0814 17:37:09.352075   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network default is active
	I0814 17:37:09.352333   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network mk-old-k8s-version-505584 is active
	I0814 17:37:09.352701   80228 main.go:141] libmachine: (old-k8s-version-505584) Getting domain xml...
	I0814 17:37:09.353363   80228 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
	I0814 17:37:10.664390   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting to get IP...
	I0814 17:37:10.665484   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.665915   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.665980   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.665888   81116 retry.go:31] will retry after 285.047327ms: waiting for machine to come up
	I0814 17:37:10.952552   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.953009   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.953036   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.952973   81116 retry.go:31] will retry after 281.728141ms: waiting for machine to come up
	I0814 17:37:11.236576   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.237153   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.237192   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.237079   81116 retry.go:31] will retry after 341.673836ms: waiting for machine to come up
	I0814 17:37:10.401790   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:11.400713   79521 node_ready.go:49] node "embed-certs-309673" has status "Ready":"True"
	I0814 17:37:11.400742   79521 node_ready.go:38] duration metric: took 7.503686271s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:11.400755   79521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:11.408217   79521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414215   79521 pod_ready.go:92] pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:11.414244   79521 pod_ready.go:81] duration metric: took 5.997997ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414256   79521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:13.420804   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:10.824020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:10.827965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828426   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:10.828465   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828807   79871 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:10.833261   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
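
The bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway address while preserving unrelated entries. A stdlib-only Go sketch of the same rewrite; the path and address are taken from the log, everything else is illustrative:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const hostsFile = "/etc/hosts"
        const entry = "192.168.50.1\thost.minikube.internal"

        data, err := os.ReadFile(hostsFile)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop any stale line for host.minikube.internal, keep everything else.
            if strings.HasSuffix(strings.TrimRight(line, " \t"), "host.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        for len(kept) > 0 && kept[len(kept)-1] == "" {
            kept = kept[:len(kept)-1] // avoid piling up trailing blank lines
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        fmt.Println("updated", hostsFile)
    }
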
	I0814 17:37:10.846928   79871 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:10.847080   79871 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:10.847142   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:10.889355   79871 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:10.889453   79871 ssh_runner.go:195] Run: which lz4
	I0814 17:37:10.894405   79871 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:37:10.898992   79871 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:10.899029   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:37:12.155402   79871 crio.go:462] duration metric: took 1.261016682s to copy over tarball
	I0814 17:37:12.155485   79871 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:14.344118   79871 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18859644s)
	I0814 17:37:14.344162   79871 crio.go:469] duration metric: took 2.188726026s to extract the tarball
	I0814 17:37:14.344173   79871 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:14.380317   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:14.428289   79871 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:37:14.428312   79871 cache_images.go:84] Images are preloaded, skipping loading
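
Since no preloaded images were found, the cached image tarball (~389 MB) is copied to the guest and unpacked into /var with lz4, after which crictl confirms the images are present and loading is skipped. A sketch of the extract step, assuming lz4 and sudo are available on the guest as the log implies:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same flags as in the log: preserve xattrs/capabilities, decompress with lz4,
        // and extract under /var so CRI-O picks up the image store.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extracting preload: %v\n%s", err, out)
        }
        log.Println("preload extracted; verify with: sudo crictl images --output json")
    }
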
	I0814 17:37:14.428326   79871 kubeadm.go:934] updating node { 192.168.50.184 8444 v1.31.0 crio true true} ...
	I0814 17:37:14.428422   79871 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-885666 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:14.428491   79871 ssh_runner.go:195] Run: crio config
	I0814 17:37:14.475385   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:14.475416   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:14.475433   79871 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:14.475464   79871 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-885666 NodeName:default-k8s-diff-port-885666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:37:14.475645   79871 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-885666"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:14.475712   79871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:37:14.485148   79871 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:14.485206   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:14.494161   79871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 17:37:14.511050   79871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:14.526395   79871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
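
The kubelet unit drop-in and the kubeadm/kubelet/kube-proxy configuration printed above are rendered from the cluster parameters (advertise address 192.168.50.184, port 8444, cgroupfs driver, the CRI-O socket) and copied to /var/tmp/minikube/kubeadm.yaml.new. A much-reduced text/template sketch of that rendering step; the template text here is illustrative only, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.Name}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        params := struct {
            NodeIP string
            Port   int
            Name   string
        }{"192.168.50.184", 8444, "default-k8s-diff-port-885666"}

        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // The rendered config would then be copied to /var/tmp/minikube/kubeadm.yaml.new.
        if err := t.Execute(os.Stdout, params); err != nil {
            panic(err)
        }
    }
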
	I0814 17:37:14.543061   79871 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:14.546747   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:14.558022   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:14.671818   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:14.688541   79871 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666 for IP: 192.168.50.184
	I0814 17:37:14.688583   79871 certs.go:194] generating shared ca certs ...
	I0814 17:37:14.688609   79871 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:14.688823   79871 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:14.688889   79871 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:14.688903   79871 certs.go:256] generating profile certs ...
	I0814 17:37:14.689020   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/client.key
	I0814 17:37:14.689132   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key.690c84bc
	I0814 17:37:14.689182   79871 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key
	I0814 17:37:14.689310   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:14.689367   79871 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:14.689385   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:14.689422   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:14.689453   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:14.689479   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:14.689524   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:14.690168   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:14.717906   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:14.759373   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:14.809775   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:14.834875   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 17:37:14.857860   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:14.886813   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:14.909803   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:14.935075   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:14.959759   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:14.985877   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:15.008456   79871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:15.025602   79871 ssh_runner.go:195] Run: openssl version
	I0814 17:37:15.031392   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:15.041931   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046475   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046531   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.052377   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:15.063000   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:15.073463   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078411   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078471   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.083835   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:15.093753   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:15.103876   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108487   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108559   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.114104   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:15.124285   79871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:15.128515   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:15.134223   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:15.139700   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:15.145537   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:15.151287   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:15.156766   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
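
Each "openssl x509 ... -checkend 86400" call above only verifies that the corresponding certificate is still valid for at least another 24 hours before the control plane is restarted. An equivalent check with crypto/x509, using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same condition as -checkend 86400: fail if expiring within the next 24h.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid until", cert.NotAfter)
    }
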
	I0814 17:37:15.162149   79871 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:15.162256   79871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:15.162314   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.198745   79871 cri.go:89] found id: ""
	I0814 17:37:15.198814   79871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:15.212198   79871 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:15.212216   79871 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:15.212256   79871 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:15.224275   79871 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:15.225218   79871 kubeconfig.go:125] found "default-k8s-diff-port-885666" server: "https://192.168.50.184:8444"
	I0814 17:37:15.227291   79871 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:15.237448   79871 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.184
	I0814 17:37:15.237494   79871 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:15.237509   79871 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:15.237563   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.281593   79871 cri.go:89] found id: ""
	I0814 17:37:15.281662   79871 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:15.298596   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:15.308702   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:15.308723   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:15.308779   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:37:15.318348   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:15.318409   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:15.330049   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:37:15.341283   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:15.341373   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:15.350584   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.361658   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:15.361718   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.373526   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:37:15.382360   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:15.382432   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
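
For each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, the code greps for the expected control-plane endpoint and deletes the file when the endpoint is not found (here the files simply do not exist yet, so all four removals are no-ops). A compact sketch of that cleanup loop, with the endpoint taken from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8444"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing file or stale endpoint: remove it so kubeadm regenerates it.
                os.Remove(f)
                fmt.Println("removed stale config:", f)
                continue
            }
            fmt.Println("keeping", f)
        }
    }
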
	I0814 17:37:15.392477   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:15.402387   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:15.528954   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:11.580887   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.581466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.581500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.581392   81116 retry.go:31] will retry after 514.448726ms: waiting for machine to come up
	I0814 17:37:12.098137   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.098652   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.098740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.098642   81116 retry.go:31] will retry after 649.302617ms: waiting for machine to come up
	I0814 17:37:12.749349   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.749777   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.749803   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.749736   81116 retry.go:31] will retry after 897.486278ms: waiting for machine to come up
	I0814 17:37:13.649145   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:13.649666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:13.649698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:13.649621   81116 retry.go:31] will retry after 1.017213079s: waiting for machine to come up
	I0814 17:37:14.669187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:14.669715   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:14.669740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:14.669679   81116 retry.go:31] will retry after 1.014709613s: waiting for machine to come up
	I0814 17:37:15.685748   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:15.686269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:15.686299   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:15.686217   81116 retry.go:31] will retry after 1.476940798s: waiting for machine to come up
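
In parallel, the old-k8s-version-505584 VM is still waiting for a DHCP lease; retry.go polls with a growing, jittered delay until libvirt reports an IP for the domain. A simplified loop in the same spirit; lookupIP is a placeholder for the lease query, not a real minikube function:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a stand-in for querying the libvirt DHCP leases for the domain.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func main() {
        delay := 250 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            ip, err := lookupIP()
            if err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, as in the log
            fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
            time.Sleep(wait)
            delay *= 2
        }
        fmt.Println("gave up waiting for machine IP")
    }
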
	I0814 17:37:15.422067   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.421689   79521 pod_ready.go:92] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.421715   79521 pod_ready.go:81] duration metric: took 5.007451471s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.421724   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426620   79521 pod_ready.go:92] pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.426644   79521 pod_ready.go:81] duration metric: took 4.912475ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426657   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430754   79521 pod_ready.go:92] pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.430776   79521 pod_ready.go:81] duration metric: took 4.110475ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430787   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434469   79521 pod_ready.go:92] pod "kube-proxy-z8x9t" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.434487   79521 pod_ready.go:81] duration metric: took 3.693253ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434498   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438294   79521 pod_ready.go:92] pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.438314   79521 pod_ready.go:81] duration metric: took 3.80298ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438346   79521 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:18.445838   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.453075   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.676680   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.741803   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.831091   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:16.831186   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.332193   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.831346   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.331620   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.832011   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.331528   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.348083   79871 api_server.go:72] duration metric: took 2.516990388s to wait for apiserver process to appear ...
	I0814 17:37:19.348119   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:37:19.348144   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:17.164541   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:17.165093   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:17.165122   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:17.165017   81116 retry.go:31] will retry after 1.644726601s: waiting for machine to come up
	I0814 17:37:18.811628   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:18.812199   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:18.812224   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:18.812132   81116 retry.go:31] will retry after 2.740531885s: waiting for machine to come up
	I0814 17:37:21.576628   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.576657   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.576672   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.601355   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.601389   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.848481   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.855499   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:21.855530   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.349158   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.353345   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:22.353368   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.848954   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.853912   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:37:22.865096   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:22.865127   79871 api_server.go:131] duration metric: took 3.516999004s to wait for apiserver health ...
	I0814 17:37:22.865138   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:22.865146   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:22.866812   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:20.446123   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.446518   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:24.945729   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.867939   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:22.881586   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:22.899815   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:22.910873   79871 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:22.910928   79871 system_pods.go:61] "coredns-6f6b679f8f-mxc9v" [d1b9d422-faff-4709-a375-f8783e75e18c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:22.910946   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [a5473465-a1c1-4413-8e77-74fb1eb398a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:22.910956   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [06c53e48-b156-42b1-b381-818f75821196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:22.910966   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [18a2d7fb-4e18-4880-8812-63d25934699b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:22.910977   79871 system_pods.go:61] "kube-proxy-4rrff" [14453cc8-da7d-4dd4-b7fa-89a26dbbf23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:22.910995   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [f0455f16-9a3e-4ede-8524-f701b1ab4ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:22.911005   79871 system_pods.go:61] "metrics-server-6867b74b74-qtzm8" [04c797ec-2e38-42a7-a023-5f60c451f780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:22.911020   79871 system_pods.go:61] "storage-provisioner" [88c2e8f0-0706-494a-8e83-0ede8f129040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:22.911032   79871 system_pods.go:74] duration metric: took 11.192968ms to wait for pod list to return data ...
	I0814 17:37:22.911044   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:22.915096   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:22.915128   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:22.915140   79871 node_conditions.go:105] duration metric: took 4.087198ms to run NodePressure ...
	I0814 17:37:22.915165   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:23.204612   79871 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209643   79871 kubeadm.go:739] kubelet initialised
	I0814 17:37:23.209665   79871 kubeadm.go:740] duration metric: took 5.023123ms waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209673   79871 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:23.215957   79871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.221969   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.221993   79871 pod_ready.go:81] duration metric: took 6.011053ms for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.222008   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.222014   79871 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.227119   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227147   79871 pod_ready.go:81] duration metric: took 5.125006ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.227157   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227163   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.231297   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231321   79871 pod_ready.go:81] duration metric: took 4.149023ms for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.231346   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231355   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:25.239956   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:21.555057   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:21.555530   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:21.555562   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:21.555484   81116 retry.go:31] will retry after 3.159225533s: waiting for machine to come up
	I0814 17:37:24.716173   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:24.716482   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:24.716507   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:24.716451   81116 retry.go:31] will retry after 3.32732131s: waiting for machine to come up
	I0814 17:37:29.512066   79367 start.go:364] duration metric: took 55.26941078s to acquireMachinesLock for "no-preload-545149"
	I0814 17:37:29.512115   79367 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:29.512123   79367 fix.go:54] fixHost starting: 
	I0814 17:37:29.512539   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:29.512574   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:29.529625   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0814 17:37:29.530074   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:29.530558   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:37:29.530585   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:29.530930   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:29.531149   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:29.531291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:37:29.532999   79367 fix.go:112] recreateIfNeeded on no-preload-545149: state=Stopped err=<nil>
	I0814 17:37:29.533049   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	W0814 17:37:29.533224   79367 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:29.535000   79367 out.go:177] * Restarting existing kvm2 VM for "no-preload-545149" ...
	I0814 17:37:27.445398   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.945246   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:27.737698   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.737890   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:28.045690   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046151   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has current primary IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046177   80228 main.go:141] libmachine: (old-k8s-version-505584) Found IP for machine: 192.168.72.49
	I0814 17:37:28.046192   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserving static IP address...
	I0814 17:37:28.046500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.046524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | skip adding static IP to network mk-old-k8s-version-505584 - found existing host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"}
	I0814 17:37:28.046540   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserved static IP address: 192.168.72.49
	I0814 17:37:28.046559   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting for SSH to be available...
	I0814 17:37:28.046571   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Getting to WaitForSSH function...
	I0814 17:37:28.048709   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049082   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.049106   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049252   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH client type: external
	I0814 17:37:28.049285   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa (-rw-------)
	I0814 17:37:28.049325   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:28.049342   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | About to run SSH command:
	I0814 17:37:28.049356   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | exit 0
	I0814 17:37:28.179844   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:28.180193   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:37:28.180865   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.183617   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184074   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.184118   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184367   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:37:28.184641   80228 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:28.184663   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:28.184891   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.187158   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187517   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.187547   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187696   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.187857   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188027   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188178   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.188320   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.188570   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.188587   80228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:28.303564   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:28.303597   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.303831   80228 buildroot.go:166] provisioning hostname "old-k8s-version-505584"
	I0814 17:37:28.303856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.304021   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.306826   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307180   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.307210   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307415   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.307608   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307769   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307915   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.308131   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.308336   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.308354   80228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-505584 && echo "old-k8s-version-505584" | sudo tee /etc/hostname
	I0814 17:37:28.434224   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-505584
	
	I0814 17:37:28.434261   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.437350   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437633   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.437666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.438077   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438245   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438395   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.438623   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.438832   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.438857   80228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-505584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-505584/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-505584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:28.564784   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:28.564815   80228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:28.564858   80228 buildroot.go:174] setting up certificates
	I0814 17:37:28.564872   80228 provision.go:84] configureAuth start
	I0814 17:37:28.564884   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.565188   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.568217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.568698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.568731   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.569013   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.571364   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571780   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.571805   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571961   80228 provision.go:143] copyHostCerts
	I0814 17:37:28.572023   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:28.572032   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:28.572076   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:28.572176   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:28.572184   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:28.572206   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:28.572275   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:28.572284   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:28.572337   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:28.572435   80228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-505584 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
	I0814 17:37:28.804798   80228 provision.go:177] copyRemoteCerts
	I0814 17:37:28.804853   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:28.804879   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.807967   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.808302   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808458   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.808690   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.808874   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.809001   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:28.900346   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:28.926959   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 17:37:28.955373   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:28.984436   80228 provision.go:87] duration metric: took 419.552519ms to configureAuth
	I0814 17:37:28.984463   80228 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:28.984630   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:37:28.984713   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.987602   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988077   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.988107   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988237   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.988486   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988641   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988768   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.988986   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.989209   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.989234   80228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:29.262630   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:29.262656   80228 machine.go:97] duration metric: took 1.078000469s to provisionDockerMachine
	I0814 17:37:29.262669   80228 start.go:293] postStartSetup for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:37:29.262683   80228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:29.262704   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.263051   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:29.263082   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.266020   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.266495   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266720   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.266919   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.267093   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.267253   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.354027   80228 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:29.358196   80228 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:29.358224   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:29.358304   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:29.358416   80228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:29.358543   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:29.367802   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:29.392802   80228 start.go:296] duration metric: took 130.117007ms for postStartSetup
	I0814 17:37:29.392846   80228 fix.go:56] duration metric: took 20.068754346s for fixHost
	I0814 17:37:29.392871   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.395638   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396032   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.396064   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396251   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.396516   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396698   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396893   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.397155   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:29.397326   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:29.397340   80228 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:29.511889   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657049.468340520
	
	I0814 17:37:29.511913   80228 fix.go:216] guest clock: 1723657049.468340520
	I0814 17:37:29.511923   80228 fix.go:229] Guest: 2024-08-14 17:37:29.46834052 +0000 UTC Remote: 2024-08-14 17:37:29.392851248 +0000 UTC m=+223.104093144 (delta=75.489272ms)
	I0814 17:37:29.511983   80228 fix.go:200] guest clock delta is within tolerance: 75.489272ms
	I0814 17:37:29.511996   80228 start.go:83] releasing machines lock for "old-k8s-version-505584", held for 20.187937886s
	I0814 17:37:29.512031   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.512333   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:29.515152   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515487   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.515524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515735   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516299   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516497   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516643   80228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:29.516723   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.516727   80228 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:29.516752   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.519600   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.519751   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520017   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520045   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520164   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520192   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520341   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520423   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520520   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520588   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520646   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520718   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.520780   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.642824   80228 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:29.648834   80228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:29.795482   80228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:29.801407   80228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:29.801486   80228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:29.821662   80228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:29.821684   80228 start.go:495] detecting cgroup driver to use...
	I0814 17:37:29.821761   80228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:29.843829   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:29.859505   80228 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:29.859590   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:29.873790   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:29.889295   80228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:30.035909   80228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:30.209521   80228 docker.go:233] disabling docker service ...
	I0814 17:37:30.209574   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:30.226980   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:30.241678   80228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:30.375116   80228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:30.498357   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:30.512272   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:30.533062   80228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:37:30.533130   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.543595   80228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:30.543664   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.554139   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.564417   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.574627   80228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:30.584957   80228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:30.594667   80228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:30.594720   80228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:30.606826   80228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:30.621990   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:30.758992   80228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:30.915494   80228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:30.915572   80228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:30.920692   80228 start.go:563] Will wait 60s for crictl version
	I0814 17:37:30.920767   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:30.924365   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:30.964662   80228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:30.964756   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:30.995534   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:31.025400   80228 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:37:31.026943   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:31.030217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030630   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:31.030665   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030943   80228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:31.034960   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:31.047742   80228 kubeadm.go:883] updating cluster {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:31.047864   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:37:31.047926   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:31.092203   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:31.092278   80228 ssh_runner.go:195] Run: which lz4
	I0814 17:37:31.096471   80228 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:37:31.100610   80228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:31.100642   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:37:29.536310   79367 main.go:141] libmachine: (no-preload-545149) Calling .Start
	I0814 17:37:29.536513   79367 main.go:141] libmachine: (no-preload-545149) Ensuring networks are active...
	I0814 17:37:29.537431   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network default is active
	I0814 17:37:29.537935   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network mk-no-preload-545149 is active
	I0814 17:37:29.538468   79367 main.go:141] libmachine: (no-preload-545149) Getting domain xml...
	I0814 17:37:29.539383   79367 main.go:141] libmachine: (no-preload-545149) Creating domain...
	I0814 17:37:30.863155   79367 main.go:141] libmachine: (no-preload-545149) Waiting to get IP...
	I0814 17:37:30.864257   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:30.864722   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:30.864812   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:30.864695   81248 retry.go:31] will retry after 244.342973ms: waiting for machine to come up
	I0814 17:37:31.111211   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.111784   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.111806   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.111735   81248 retry.go:31] will retry after 277.033145ms: waiting for machine to come up
	I0814 17:37:31.390071   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.390511   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.390554   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.390429   81248 retry.go:31] will retry after 320.225451ms: waiting for machine to come up
	I0814 17:37:31.949069   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:34.445833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:31.741110   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:33.239418   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.239449   79871 pod_ready.go:81] duration metric: took 10.008084028s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.239462   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244600   79871 pod_ready.go:92] pod "kube-proxy-4rrff" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.244628   79871 pod_ready.go:81] duration metric: took 5.157296ms for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244648   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253613   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:35.253643   79871 pod_ready.go:81] duration metric: took 2.008985731s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253657   79871 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:32.582064   80228 crio.go:462] duration metric: took 1.485645107s to copy over tarball
	I0814 17:37:32.582151   80228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:35.556765   80228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974581109s)
	I0814 17:37:35.556795   80228 crio.go:469] duration metric: took 2.9747s to extract the tarball
	I0814 17:37:35.556802   80228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:35.599129   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:35.632752   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:35.632775   80228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:35.632831   80228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.632864   80228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.632892   80228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:37:35.632911   80228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.632944   80228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.633112   80228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634793   80228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.634821   80228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:37:35.634824   80228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634885   80228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.634910   80228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.635009   80228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.635082   80228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.635265   80228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.905566   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:37:35.953168   80228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:37:35.953210   80228 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:37:35.953260   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:35.957961   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:35.978859   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.978920   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.988556   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.993281   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.997933   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.018501   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.043527   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.146739   80228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:37:36.146812   80228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:37:36.146832   80228 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.146852   80228 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.146881   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.146891   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163832   80228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:37:36.163856   80228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:37:36.163877   80228 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.163889   80228 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.163923   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163924   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163927   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.172482   80228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:37:36.172530   80228 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.172599   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.195157   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.195208   80228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:37:36.195165   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.195242   80228 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.195245   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.195277   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.237454   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:37:36.237519   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.237549   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.292614   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.306771   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.306794   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:31.712067   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.712601   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.712630   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.712575   81248 retry.go:31] will retry after 546.687472ms: waiting for machine to come up
	I0814 17:37:32.261457   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.261921   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.261950   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.261854   81248 retry.go:31] will retry after 484.345236ms: waiting for machine to come up
	I0814 17:37:32.747475   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.748118   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.748149   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.748060   81248 retry.go:31] will retry after 899.564198ms: waiting for machine to come up
	I0814 17:37:33.649684   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:33.650206   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:33.650234   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:33.650155   81248 retry.go:31] will retry after 1.039934932s: waiting for machine to come up
	I0814 17:37:34.691741   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:34.692197   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:34.692220   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:34.692169   81248 retry.go:31] will retry after 925.402437ms: waiting for machine to come up
	I0814 17:37:35.618737   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:35.619169   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:35.619200   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:35.619102   81248 retry.go:31] will retry after 1.401066913s: waiting for machine to come up
	I0814 17:37:36.447039   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:38.945321   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:37.260912   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:39.759967   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:36.321893   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.339836   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.339929   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.426588   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.426659   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.433149   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.469717   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:36.477512   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.477583   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.477761   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.538635   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:37:36.557712   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:37:36.558304   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.700263   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:37:36.700333   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:37:36.700410   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:37:36.700481   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:37:36.700527   80228 cache_images.go:92] duration metric: took 1.067740607s to LoadCachedImages
	W0814 17:37:36.700602   80228 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 17:37:36.700623   80228 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0814 17:37:36.700757   80228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-505584 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:36.700846   80228 ssh_runner.go:195] Run: crio config
	I0814 17:37:36.748814   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:37:36.748843   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:36.748860   80228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:36.748885   80228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-505584 NodeName:old-k8s-version-505584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:37:36.749053   80228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-505584"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:36.749129   80228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:37:36.760058   80228 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:36.760131   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:36.769388   80228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0814 17:37:36.786594   80228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:36.807695   80228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0814 17:37:36.825609   80228 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:36.829296   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:36.841882   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:36.976199   80228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:36.993682   80228 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584 for IP: 192.168.72.49
	I0814 17:37:36.993707   80228 certs.go:194] generating shared ca certs ...
	I0814 17:37:36.993728   80228 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:36.993924   80228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:36.993985   80228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:36.993998   80228 certs.go:256] generating profile certs ...
	I0814 17:37:36.994115   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key
	I0814 17:37:36.994206   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f
	I0814 17:37:36.994261   80228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key
	I0814 17:37:36.994428   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:36.994478   80228 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:36.994492   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:36.994522   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:36.994557   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:36.994603   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:36.994661   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:36.995558   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:37.043910   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:37.073810   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:37.097939   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:37.124449   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 17:37:37.154747   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:37.179474   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:37.204471   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:37.228579   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:37.266929   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:37.292912   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:37.316803   80228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:37.332934   80228 ssh_runner.go:195] Run: openssl version
	I0814 17:37:37.339316   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:37.349829   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354230   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354297   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.360089   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:37.371417   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:37.381777   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385894   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385955   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.391826   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:37.402049   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:37.412038   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416395   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416448   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.421794   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:37.431868   80228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:37.436305   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:37.442838   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:37.448991   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:37.454769   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:37.460381   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:37.466406   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:37:37.472466   80228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:37.472584   80228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:37.472636   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.508256   80228 cri.go:89] found id: ""
	I0814 17:37:37.508323   80228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:37.518824   80228 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:37.518856   80228 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:37.518941   80228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:37.529328   80228 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:37.530242   80228 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:37.530890   80228 kubeconfig.go:62] /home/jenkins/minikube-integration/19446-13977/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-505584" cluster setting kubeconfig missing "old-k8s-version-505584" context setting]
	I0814 17:37:37.531922   80228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:37.539843   80228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:37.550012   80228 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0814 17:37:37.550051   80228 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:37.550063   80228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:37.550113   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.590226   80228 cri.go:89] found id: ""
	I0814 17:37:37.590307   80228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:37.606242   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:37.615340   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:37.615377   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:37.615436   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:37:37.623996   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:37.624063   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:37.633356   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:37:37.642888   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:37.642958   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:37.652532   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.661607   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:37.661679   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.670876   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:37:37.679780   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:37.679846   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:37.690044   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:37.699617   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:37.813799   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.666487   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.901307   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.029983   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.139056   80228 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:39.139133   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:39.639191   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.139315   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.639292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:41.139421   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:37.021766   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:37.022253   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:37.022282   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:37.022216   81248 retry.go:31] will retry after 2.184222941s: waiting for machine to come up
	I0814 17:37:39.209777   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:39.210239   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:39.210265   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:39.210203   81248 retry.go:31] will retry after 2.903962154s: waiting for machine to come up
	I0814 17:37:41.445413   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:43.949816   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.760985   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:44.260273   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.639312   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.139387   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.639981   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.139499   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.639391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.139425   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.639677   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.139466   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.639426   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:46.140065   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.116682   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:42.117116   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:42.117154   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:42.117086   81248 retry.go:31] will retry after 3.387467992s: waiting for machine to come up
	I0814 17:37:45.505680   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:45.506034   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:45.506056   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:45.505986   81248 retry.go:31] will retry after 2.944973353s: waiting for machine to come up
	I0814 17:37:46.443772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:48.445046   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.759297   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:49.260881   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.640043   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.139213   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.639848   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.140080   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.639961   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.139473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.639212   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.139781   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.640028   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.140140   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.452516   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453064   79367 main.go:141] libmachine: (no-preload-545149) Found IP for machine: 192.168.39.162
	I0814 17:37:48.453092   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has current primary IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453099   79367 main.go:141] libmachine: (no-preload-545149) Reserving static IP address...
	I0814 17:37:48.453513   79367 main.go:141] libmachine: (no-preload-545149) Reserved static IP address: 192.168.39.162
	I0814 17:37:48.453564   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.453578   79367 main.go:141] libmachine: (no-preload-545149) Waiting for SSH to be available...
	I0814 17:37:48.453608   79367 main.go:141] libmachine: (no-preload-545149) DBG | skip adding static IP to network mk-no-preload-545149 - found existing host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"}
	I0814 17:37:48.453630   79367 main.go:141] libmachine: (no-preload-545149) DBG | Getting to WaitForSSH function...
	I0814 17:37:48.455959   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456279   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.456304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456429   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH client type: external
	I0814 17:37:48.456449   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa (-rw-------)
	I0814 17:37:48.456490   79367 main.go:141] libmachine: (no-preload-545149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:48.456506   79367 main.go:141] libmachine: (no-preload-545149) DBG | About to run SSH command:
	I0814 17:37:48.456520   79367 main.go:141] libmachine: (no-preload-545149) DBG | exit 0
	I0814 17:37:48.579489   79367 main.go:141] libmachine: (no-preload-545149) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:48.579924   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetConfigRaw
	I0814 17:37:48.580615   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.583202   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583545   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.583592   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583857   79367 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/config.json ...
	I0814 17:37:48.584093   79367 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:48.584113   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:48.584340   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.586712   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587086   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.587107   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587259   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.587441   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587720   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.587869   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.588029   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.588040   79367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:48.691255   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:48.691290   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691555   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:37:48.691593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691798   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.694492   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694768   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.694797   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694907   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.695084   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695248   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695396   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.695556   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.695777   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.695798   79367 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-545149 && echo "no-preload-545149" | sudo tee /etc/hostname
	I0814 17:37:48.813509   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-545149
	
	I0814 17:37:48.813537   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.816304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816698   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.816732   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816884   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.817057   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817265   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817393   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.817586   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.817809   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.817836   79367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:48.927482   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:48.927512   79367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:48.927540   79367 buildroot.go:174] setting up certificates
	I0814 17:37:48.927551   79367 provision.go:84] configureAuth start
	I0814 17:37:48.927567   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.927831   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.930532   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.930879   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.930906   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.931104   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.933420   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933754   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.933783   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933893   79367 provision.go:143] copyHostCerts
	I0814 17:37:48.933968   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:48.933979   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:48.934040   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:48.934146   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:48.934156   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:48.934186   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:48.934262   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:48.934271   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:48.934302   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
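
copyHostCerts above refreshes cert.pem, key.pem and ca.pem under the .minikube directory by removing any existing copy before writing the new one. A rough Go sketch of that remove-then-copy pattern (refreshCopy and the paths are illustrative, not the exec_runner implementation):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // refreshCopy replaces dst with the contents of src, removing any existing
    // destination first so a stale certificate never survives the copy.
    func refreshCopy(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return fmt.Errorf("rm %s: %w", dst, err)
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        // Example with illustrative paths.
        if err := refreshCopy(".minikube/certs/ca.pem", ".minikube/ca.pem"); err != nil {
            fmt.Println("copy failed:", err)
        }
    }
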
	I0814 17:37:48.934377   79367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.no-preload-545149 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-545149]
	I0814 17:37:49.287517   79367 provision.go:177] copyRemoteCerts
	I0814 17:37:49.287580   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:49.287607   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.290280   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.290690   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290856   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.291063   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.291180   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.291304   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.374716   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:49.398652   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:37:49.422885   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:37:49.448774   79367 provision.go:87] duration metric: took 521.207251ms to configureAuth
	I0814 17:37:49.448800   79367 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:49.448972   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:49.449064   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.452034   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452373   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.452403   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452604   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.452859   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453058   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453217   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.453388   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.453579   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.453601   79367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:49.711896   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:49.711922   79367 machine.go:97] duration metric: took 1.127817152s to provisionDockerMachine
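
The sudo tee command above writes /etc/sysconfig/crio.minikube with an --insecure-registry flag for the service CIDR and then restarts CRI-O; the %!s(MISSING) token is only a formatting artifact in the logged template, and the echoed output above shows the line that was actually written. A small Go sketch of rendering that drop-in (crioMinikubeOptions is a hypothetical helper):

    package main

    import "fmt"

    // crioMinikubeOptions renders the /etc/sysconfig/crio.minikube drop-in that
    // passes extra flags (here an insecure registry CIDR) to the crio service.
    func crioMinikubeOptions(serviceCIDR string) string {
        return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
    }

    func main() {
        fmt.Print(crioMinikubeOptions("10.96.0.0/12"))
    }
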
	I0814 17:37:49.711933   79367 start.go:293] postStartSetup for "no-preload-545149" (driver="kvm2")
	I0814 17:37:49.711942   79367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:49.711977   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.712299   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:49.712324   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.714736   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715059   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.715097   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715232   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.715428   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.715616   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.715769   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.797746   79367 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:49.801764   79367 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:49.801794   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:49.801863   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:49.801960   79367 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:49.802081   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:49.811506   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:49.834762   79367 start.go:296] duration metric: took 122.81358ms for postStartSetup
	I0814 17:37:49.834812   79367 fix.go:56] duration metric: took 20.32268926s for fixHost
	I0814 17:37:49.834837   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.837418   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837739   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.837768   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837903   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.838114   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838292   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838438   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.838643   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.838838   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.838850   79367 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:49.944936   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657069.919883473
	
	I0814 17:37:49.944965   79367 fix.go:216] guest clock: 1723657069.919883473
	I0814 17:37:49.944975   79367 fix.go:229] Guest: 2024-08-14 17:37:49.919883473 +0000 UTC Remote: 2024-08-14 17:37:49.834818813 +0000 UTC m=+358.184638535 (delta=85.06466ms)
	I0814 17:37:49.945005   79367 fix.go:200] guest clock delta is within tolerance: 85.06466ms
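
fix.go compares the guest clock read over SSH with the host's wall clock and only resets the guest time when the delta exceeds a tolerance; here the 85ms difference is accepted. A minimal sketch of that comparison (the 2s tolerance below is an assumed value for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports whether the guest clock is close enough to the host
    // clock that no adjustment is needed, along with the absolute delta.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(85 * time.Millisecond)
        // 2s is an assumed tolerance for this sketch, not minikube's actual value.
        delta, ok := clockDeltaOK(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }
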
	I0814 17:37:49.945017   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 20.432923283s
	I0814 17:37:49.945044   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.945291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:49.947847   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948269   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.948295   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948500   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949082   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949262   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949347   79367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:49.949406   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.949517   79367 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:49.949541   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.952281   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952328   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952692   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952833   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.952836   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952895   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.953037   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.953075   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953201   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953212   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953400   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953412   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.953543   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:50.072094   79367 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:50.080210   79367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:50.227736   79367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:50.233533   79367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:50.233597   79367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:50.249452   79367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:50.249474   79367 start.go:495] detecting cgroup driver to use...
	I0814 17:37:50.249552   79367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:50.265740   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:50.278769   79367 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:50.278833   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:50.291625   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:50.304529   79367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:50.415405   79367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:50.556016   79367 docker.go:233] disabling docker service ...
	I0814 17:37:50.556092   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:50.570197   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:50.583068   79367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:50.721414   79367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:50.850890   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:50.864530   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:50.882021   79367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:50.882097   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.891490   79367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:50.891564   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.901437   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.911316   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.920935   79367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:50.930571   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.940106   79367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.957351   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
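
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: the pause image and cgroup manager lines are replaced wholesale, conmon_cgroup is re-inserted after cgroup_manager, and default_sysctls gains net.ipv4.ip_unprivileged_port_start=0. A rough Go/regexp equivalent of the first two substitutions (a sketch only, not the ssh_runner code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the same kind of whole-line substitutions that the
    // sed commands perform on 02-crio.conf.
    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
        return conf
    }

    func main() {
        in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
        fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
    }
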
	I0814 17:37:50.967222   79367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:50.976120   79367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:50.976170   79367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:50.990922   79367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
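
When the bridge-netfilter sysctl cannot be read, the br_netfilter module is usually just not loaded yet, so the flow above falls back to modprobe and then enables IPv4 forwarding. A hedged Go sketch of that check-then-load fallback (ensureBridgeNetfilter is a hypothetical helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter loads br_netfilter if the bridge-nf-call-iptables
    // sysctl is not visible yet, then enables IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // A missing /proc entry usually means the module is not loaded.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
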
	I0814 17:37:51.000086   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:51.116655   79367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:51.246182   79367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:51.246265   79367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:51.250838   79367 start.go:563] Will wait 60s for crictl version
	I0814 17:37:51.250900   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.254633   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:51.299890   79367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
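
Both the crio.sock stat and the crictl version probe above run inside a bounded 60s wait. A generic poll-until-deadline sketch in Go (the one-second interval and the probe are assumptions for illustration):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitFor retries probe every interval until it succeeds or timeout elapses.
    func waitFor(timeout, interval time.Duration, probe func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := probe()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v: %w", timeout, err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        // Wait up to 60s for the CRI-O socket to appear, probing once per second.
        err := waitFor(60*time.Second, time.Second, func() error {
            _, err := os.Stat("/var/run/crio/crio.sock")
            return err
        })
        fmt.Println("socket wait:", err)
    }
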
	I0814 17:37:51.299992   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.328292   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.360415   79367 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:51.361536   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:51.364443   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.364884   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:51.364914   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.365112   79367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:51.368941   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:51.380519   79367 kubeadm.go:883] updating cluster {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:51.380668   79367 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:51.380705   79367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:51.413314   79367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:51.413346   79367 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:51.413417   79367 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.413435   79367 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.413452   79367 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.413395   79367 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.413473   79367 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 17:37:51.413440   79367 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.413521   79367 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.413529   79367 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.414920   79367 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.414940   79367 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 17:37:51.414983   79367 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.415006   79367 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.415010   79367 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.414982   79367 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.415070   79367 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.415100   79367 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.664642   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.686463   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
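
Because no preload tarball exists for v1.31.0 with crio, each required image is checked in the container runtime with podman image inspect and, when missing, loaded from the local cache tarballs under /var/lib/minikube/images, as the following lines show. A simplified Go sketch of that check-then-load step (ensureImage and the tarball naming are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // ensureImage checks whether the runtime already has the image and, if not,
    // loads it from a cached tarball (paths and naming are illustrative only).
    func ensureImage(image, cacheDir string) error {
        if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
            return nil // already present in the container runtime
        }
        // Cache tarballs in this sketch are named like kube-scheduler_v1.31.0.
        tarball := filepath.Join(cacheDir, strings.ReplaceAll(filepath.Base(image), ":", "_"))
        return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
    }

    func main() {
        err := ensureImage("registry.k8s.io/kube-scheduler:v1.31.0", "/var/lib/minikube/images")
        fmt.Println("load result:", err)
    }
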
	I0814 17:37:50.445457   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:52.945915   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.762809   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:54.259593   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
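
The interleaved pod_ready lines come from the other profiles in this run polling whether their metrics-server pod has reached the Ready condition. A client-go sketch of that readiness check (clientset setup via KUBECONFIG; the pod name is taken from the log, everything else is illustrative):

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod has the Ready condition set to True.
    func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        ready, err := podReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-jflvw")
        fmt.Println("ready:", ready, "err:", err)
    }
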
	I0814 17:37:51.639969   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.139918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.639403   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.139921   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.640224   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.140272   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.639242   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.139908   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.639233   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:56.139955   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.699627   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 17:37:51.718031   79367 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 17:37:51.718085   79367 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.718133   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.736370   79367 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 17:37:51.736408   79367 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.736454   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.779229   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.800986   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.819343   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.841240   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.853614   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.853650   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.853753   79367 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 17:37:51.853798   79367 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.853842   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.866717   79367 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 17:37:51.866757   79367 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.866807   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.908593   79367 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 17:37:51.908644   79367 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.908701   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.936701   79367 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 17:37:51.936737   79367 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.936784   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.944882   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.944962   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.944983   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.945008   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.945070   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.945089   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.063281   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.080543   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.080556   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.080574   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:52.080629   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:52.080647   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.126573   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.205600   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.205623   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.236617   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 17:37:52.236678   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.236757   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.237083   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 17:37:52.237161   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:52.238804   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 17:37:52.238891   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:52.294945   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 17:37:52.295018   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 17:37:52.295064   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:37:52.295103   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 17:37:52.295127   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295189   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295110   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:52.302365   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 17:37:52.302388   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 17:37:52.302423   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 17:37:52.302472   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:37:52.306933   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 17:37:52.307107   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 17:37:52.309298   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.976780716s)
	I0814 17:37:54.272032   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 17:37:54.272053   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.272063   79367 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.962736886s)
	I0814 17:37:54.272100   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (1.969503874s)
	I0814 17:37:54.272150   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 17:37:54.272105   79367 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 17:37:54.272192   79367 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.272250   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:56.021236   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.749108117s)
	I0814 17:37:56.021281   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 17:37:56.021288   79367 ssh_runner.go:235] Completed: which crictl: (1.749013682s)
	I0814 17:37:56.021309   79367 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021370   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021386   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:55.445017   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:57.445204   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:59.945329   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.260666   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:58.760907   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.639799   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.140184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.639918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.139310   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.639393   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.140139   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.639614   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.139472   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.139946   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.830150   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.808753337s)
	I0814 17:37:59.830181   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 17:37:59.830205   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:59.830208   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.80880721s)
	I0814 17:37:59.830253   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:59.830255   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:38:02.444320   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:04.444667   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.260951   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:03.759895   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.639422   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.139858   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.639412   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.140047   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.640170   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.139779   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.639728   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.139343   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.640249   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.139448   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.796675   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.966400982s)
	I0814 17:38:01.796690   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.966414051s)
	I0814 17:38:01.796708   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 17:38:01.796735   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.796757   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:38:01.796796   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.841898   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 17:38:01.841994   79367 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.571965   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.775142217s)
	I0814 17:38:03.571991   79367 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (1.729967853s)
	I0814 17:38:03.572002   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 17:38:03.572019   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 17:38:03.572028   79367 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.572079   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:04.422670   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 17:38:04.422705   79367 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:04.422760   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:06.277419   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.854632861s)
	I0814 17:38:06.277457   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 17:38:06.277488   79367 cache_images.go:123] Successfully loaded all cached images
	I0814 17:38:06.277494   79367 cache_images.go:92] duration metric: took 14.864134758s to LoadCachedImages
	I0814 17:38:06.277504   79367 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.0 crio true true} ...
	I0814 17:38:06.277628   79367 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-545149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
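
The kubelet drop-in above pins --hostname-override, --kubeconfig and --node-ip for this node alongside the versioned kubelet binary. A text/template sketch that renders an equivalent ExecStart line (template text and field names are assumptions, not minikube's templates):

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletExec is a simplified template for the ExecStart line written into
    // the kubelet systemd drop-in; the field names are illustrative.
    var kubeletExec = template.Must(template.New("kubelet").Parse(
        "ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet " +
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf " +
            "--config=/var/lib/kubelet/config.yaml " +
            "--hostname-override={{.NodeName}} " +
            "--kubeconfig=/etc/kubernetes/kubelet.conf " +
            "--node-ip={{.NodeIP}}\n"))

    func main() {
        data := struct{ KubernetesVersion, NodeName, NodeIP string }{
            "v1.31.0", "no-preload-545149", "192.168.39.162",
        }
        if err := kubeletExec.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
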
	I0814 17:38:06.277690   79367 ssh_runner.go:195] Run: crio config
	I0814 17:38:06.337971   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:06.337990   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:06.337999   79367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:38:06.338019   79367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545149 NodeName:no-preload-545149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:38:06.338148   79367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:38:06.338222   79367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:38:06.348156   79367 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:38:06.348219   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:38:06.356784   79367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0814 17:38:06.372439   79367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:38:06.388610   79367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0814 17:38:06.405084   79367 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I0814 17:38:06.408753   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:38:06.420313   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:38:06.546115   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:38:06.563747   79367 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149 for IP: 192.168.39.162
	I0814 17:38:06.563776   79367 certs.go:194] generating shared ca certs ...
	I0814 17:38:06.563799   79367 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:38:06.563973   79367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:38:06.564035   79367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:38:06.564058   79367 certs.go:256] generating profile certs ...
	I0814 17:38:06.564150   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/client.key
	I0814 17:38:06.564207   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key.d0704694
	I0814 17:38:06.564241   79367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key
	I0814 17:38:06.564349   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:38:06.564377   79367 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:38:06.564386   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:38:06.564411   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:38:06.564437   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:38:06.564459   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:38:06.564497   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:38:06.565269   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:38:06.592622   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:38:06.619148   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:38:06.646169   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:38:06.682399   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:38:06.446354   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.948005   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:05.760991   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.260189   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:10.260816   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:06.639416   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.140176   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.639682   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.140063   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.640014   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.139435   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.639256   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.139949   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.640283   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:11.139394   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
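The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a wait loop: the process is polled at roughly 500ms intervals until it appears. A minimal Go sketch of such a loop (illustrative only; the command pattern and timeout are assumptions, not minikube's exact implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it succeeds or the timeout
// elapses, mirroring the ~500ms retry cadence visible in the log above.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("process matching %q did not appear within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}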
	I0814 17:38:06.714195   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:38:06.750431   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:38:06.772702   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:38:06.793932   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:38:06.815601   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:38:06.837187   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:38:06.858175   79367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:38:06.876187   79367 ssh_runner.go:195] Run: openssl version
	I0814 17:38:06.881909   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:38:06.892057   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896191   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896251   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.901630   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:38:06.910888   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:38:06.920223   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924480   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924527   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.929591   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:38:06.939072   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:38:06.949970   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954288   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954339   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.959551   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:38:06.969505   79367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:38:06.973905   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:38:06.980211   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:38:06.986779   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:38:06.992220   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:38:06.997446   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:38:07.002681   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
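The "openssl x509 ... -checkend 86400" runs above verify that each control-plane certificate will still be valid 24 hours from now. A minimal equivalent check in Go with crypto/x509 (a sketch with an example path; not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// within the given window, the same question `openssl x509 -checkend` answers.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path is an example only.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}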
	I0814 17:38:07.008037   79367 kubeadm.go:392] StartCluster: {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:38:07.008131   79367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:38:07.008188   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.043144   79367 cri.go:89] found id: ""
	I0814 17:38:07.043214   79367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:38:07.052215   79367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:38:07.052233   79367 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:38:07.052281   79367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:38:07.060618   79367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:38:07.061557   79367 kubeconfig.go:125] found "no-preload-545149" server: "https://192.168.39.162:8443"
	I0814 17:38:07.063554   79367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:38:07.072026   79367 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I0814 17:38:07.072064   79367 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:38:07.072075   79367 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:38:07.072117   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.109349   79367 cri.go:89] found id: ""
	I0814 17:38:07.109412   79367 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:38:07.126888   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:38:07.138272   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:38:07.138293   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:38:07.138367   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:38:07.147160   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:38:07.147220   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:38:07.156664   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:38:07.165122   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:38:07.165167   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:38:07.173478   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.181391   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:38:07.181449   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.189750   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:38:07.198215   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:38:07.198274   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:38:07.207384   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:38:07.216034   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:07.337710   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.227720   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.455979   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.521250   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.654574   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:38:08.654684   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.155639   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.655182   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.696193   79367 api_server.go:72] duration metric: took 1.041620068s to wait for apiserver process to appear ...
	I0814 17:38:09.696223   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:38:09.696241   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:09.696703   79367 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I0814 17:38:10.197180   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.389673   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.389702   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.389717   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.403106   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.403138   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.696486   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.700755   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:12.700784   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.196293   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.200564   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:13.200592   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.697253   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.705430   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:38:13.732816   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:38:13.732843   79367 api_server.go:131] duration metric: took 4.036614106s to wait for apiserver health ...
	I0814 17:38:13.732852   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:13.732860   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:13.734904   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
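The healthz progression above (403 while the anonymous probe is rejected before RBAC bootstrap roles exist, then 500 while post-start hooks such as rbac/bootstrap-roles are still failing, then 200 "ok") is the usual sequence when polling an apiserver that is coming back up. A minimal anonymous probe in Go (a sketch only; the insecure TLS setting mirrors the unauthenticated check in the log and is not a production recommendation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits /healthz every 500ms until it returns 200 or the timeout
// elapses; intermediate 403/500 responses are printed and retried.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous, unverified probe, like the log above.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.162:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}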
	I0814 17:38:11.444846   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:13.943583   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:12.759294   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:14.760919   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:11.640107   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.140034   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.639463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.139428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.639575   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.140005   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.639473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.140124   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.639459   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:16.139187   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.736533   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:38:13.756650   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:38:13.776947   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:38:13.803170   79367 system_pods.go:59] 8 kube-system pods found
	I0814 17:38:13.803214   79367 system_pods.go:61] "coredns-6f6b679f8f-tt46z" [169beaf0-0310-47d5-b212-9a81c6b3df68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:38:13.803228   79367 system_pods.go:61] "etcd-no-preload-545149" [47e22bb4-bedb-433f-ae2e-f281269c6e87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:38:13.803240   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [37854a66-b05b-49fe-834b-98f724087ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:38:13.803249   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [69189ec1-6f8c-4613-bf47-46e101a14ecd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:38:13.803307   79367 system_pods.go:61] "kube-proxy-gfrqp" [2206243d-f6e0-462c-969c-60e192038700] Running
	I0814 17:38:13.803314   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [0bbd41bd-0a18-486b-b78c-9a0e9efe209a] Running
	I0814 17:38:13.803322   79367 system_pods.go:61] "metrics-server-6867b74b74-8c2cx" [b30f3018-f316-4997-a8fa-ff6c83aa7dd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:38:13.803341   79367 system_pods.go:61] "storage-provisioner" [635027cc-ac5d-4474-a243-ef48b3580998] Running
	I0814 17:38:13.803349   79367 system_pods.go:74] duration metric: took 26.377795ms to wait for pod list to return data ...
	I0814 17:38:13.803357   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:38:13.814093   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:38:13.814120   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:38:13.814131   79367 node_conditions.go:105] duration metric: took 10.768606ms to run NodePressure ...
	I0814 17:38:13.814147   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:14.196481   79367 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202205   79367 kubeadm.go:739] kubelet initialised
	I0814 17:38:14.202239   79367 kubeadm.go:740] duration metric: took 5.723699ms waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202250   79367 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:38:14.209431   79367 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.215568   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215597   79367 pod_ready.go:81] duration metric: took 6.13175ms for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.215610   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215620   79367 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.227611   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227647   79367 pod_ready.go:81] duration metric: took 12.016107ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.227661   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227669   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.235095   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235130   79367 pod_ready.go:81] duration metric: took 7.452418ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.235143   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235153   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.244417   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244447   79367 pod_ready.go:81] duration metric: took 9.283911ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.244459   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244466   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999946   79367 pod_ready.go:92] pod "kube-proxy-gfrqp" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:14.999968   79367 pod_ready.go:81] duration metric: took 755.491905ms for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999977   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
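The pod_ready waits above treat a pod as "Ready" once its PodReady condition is True (and skip pods whose node is not yet Ready). A minimal client-go check for that condition (a sketch with an assumed kubeconfig path; not minikube's pod_ready implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady returns true when the pod's PodReady condition is True,
// the same condition the pod_ready waits above are checking.
func isPodReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path and pod name are examples only.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ready, err := isPodReady(clientset, "kube-system", "kube-scheduler-no-preload-545149")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}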
	I0814 17:38:15.945421   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:18.444758   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.761265   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.639219   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.139463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.639839   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.140251   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.639890   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.139999   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.639652   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.139316   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.639809   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:21.139471   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.005796   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.006769   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:20.506792   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:20.506815   79367 pod_ready.go:81] duration metric: took 5.50683258s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.506825   79367 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.445449   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:22.446622   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:24.943859   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.760870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:23.761708   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.139292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.640151   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.139450   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.639996   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.139447   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.639267   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.139595   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.639451   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:26.140190   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.513577   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:25.012936   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.945216   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.444769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.260276   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:28.263789   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.640120   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.140141   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.640184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.139896   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.140246   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.639895   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.139860   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.639358   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:31.140029   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.014354   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.516049   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.944967   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.444885   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:30.760174   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:33.259870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:35.260137   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.639317   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.140039   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.139240   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.640181   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.139789   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.639297   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.139871   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.639347   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:36.140044   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.013464   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.513741   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.944347   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:38.945374   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:37.759866   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:39.760334   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.640132   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.139254   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.639457   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.139928   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.639196   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:39.139906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:39.139980   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:39.179494   80228 cri.go:89] found id: ""
	I0814 17:38:39.179524   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.179535   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:39.179543   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:39.179619   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:39.210704   80228 cri.go:89] found id: ""
	I0814 17:38:39.210732   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.210741   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:39.210746   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:39.210796   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:39.247562   80228 cri.go:89] found id: ""
	I0814 17:38:39.247590   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.247597   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:39.247603   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:39.247665   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:39.281456   80228 cri.go:89] found id: ""
	I0814 17:38:39.281480   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.281488   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:39.281494   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:39.281553   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:39.318588   80228 cri.go:89] found id: ""
	I0814 17:38:39.318620   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.318630   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:39.318638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:39.318695   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:39.350270   80228 cri.go:89] found id: ""
	I0814 17:38:39.350294   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.350303   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:39.350311   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:39.350387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:39.382168   80228 cri.go:89] found id: ""
	I0814 17:38:39.382198   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.382209   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:39.382215   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:39.382325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:39.415307   80228 cri.go:89] found id: ""
	I0814 17:38:39.415342   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.415354   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:39.415375   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:39.415388   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:39.469591   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:39.469632   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:39.482909   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:39.482942   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:39.609874   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:39.609906   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:39.609921   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:39.683210   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:39.683253   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
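Each 'found id: "" / 0 containers: []' pair above comes from listing containers by name through crictl in quiet mode and splitting the output into IDs; an empty result is what triggers the log-gathering fallback. A minimal sketch of that listing check (illustrative only; the flags shown are the ones visible in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl reports for a given
// container name; an empty slice matches the "0 containers: []" lines above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}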
	I0814 17:38:39.013876   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.513527   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.444286   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:43.444539   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.260548   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:44.263171   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.222687   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:42.235017   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:42.235088   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:42.285518   80228 cri.go:89] found id: ""
	I0814 17:38:42.285544   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.285553   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:42.285559   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:42.285614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:42.320462   80228 cri.go:89] found id: ""
	I0814 17:38:42.320492   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.320500   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:42.320506   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:42.320594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:42.353484   80228 cri.go:89] found id: ""
	I0814 17:38:42.353515   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.353528   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:42.353537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:42.353614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:42.388122   80228 cri.go:89] found id: ""
	I0814 17:38:42.388152   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.388163   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:42.388171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:42.388239   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:42.420246   80228 cri.go:89] found id: ""
	I0814 17:38:42.420275   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.420285   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:42.420293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:42.420359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:42.454636   80228 cri.go:89] found id: ""
	I0814 17:38:42.454669   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.454680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:42.454687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:42.454749   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:42.494638   80228 cri.go:89] found id: ""
	I0814 17:38:42.494670   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.494679   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:42.494686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:42.494751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:42.532224   80228 cri.go:89] found id: ""
	I0814 17:38:42.532257   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.532269   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:42.532281   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:42.532296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:42.546100   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:42.546132   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:42.616561   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:42.616589   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:42.616604   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:42.697269   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:42.697305   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:42.737787   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:42.737821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.293788   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:45.309020   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:45.309080   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:45.349218   80228 cri.go:89] found id: ""
	I0814 17:38:45.349246   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.349254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:45.349260   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:45.349318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:45.387622   80228 cri.go:89] found id: ""
	I0814 17:38:45.387653   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.387664   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:45.387672   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:45.387750   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:45.422120   80228 cri.go:89] found id: ""
	I0814 17:38:45.422154   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.422164   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:45.422169   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:45.422226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:45.457309   80228 cri.go:89] found id: ""
	I0814 17:38:45.457337   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.457352   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:45.457361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:45.457412   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:45.488969   80228 cri.go:89] found id: ""
	I0814 17:38:45.489000   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.489011   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:45.489019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:45.489081   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:45.522230   80228 cri.go:89] found id: ""
	I0814 17:38:45.522258   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.522273   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:45.522280   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:45.522345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:45.555394   80228 cri.go:89] found id: ""
	I0814 17:38:45.555425   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.555440   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:45.555448   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:45.555520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:45.587870   80228 cri.go:89] found id: ""
	I0814 17:38:45.587899   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.587910   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:45.587934   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:45.587951   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.638662   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:45.638709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:45.652217   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:45.652248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:45.733611   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:45.733635   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:45.733648   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:45.822733   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:45.822773   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
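This cycle repeats for as long as this control plane stays down (the v1.20.0 binary path suggests the old-k8s-version profile): every "sudo crictl ps -a --quiet --name=..." probe returns an empty ID list, and "kubectl describe nodes" fails because nothing is listening on localhost:8443. The same probes can be reproduced by hand; a minimal sketch, run from a "minikube ssh" session on the affected profile (the exact profile name is not shown in this excerpt):

	# Control-plane probes equivalent to the ones the harness repeats above
	sudo crictl ps -a --quiet --name=kube-apiserver      # empty output corresponds to the 'found id: ""' lines
	sudo journalctl -u kubelet -n 400                     # kubelet logs: why no static pods were created
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"   # expect connection refused while it is down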
	I0814 17:38:44.013405   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.014164   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:45.445289   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:47.944672   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.760279   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:49.260108   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:48.361519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:48.374848   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:48.374916   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:48.410849   80228 cri.go:89] found id: ""
	I0814 17:38:48.410897   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.410911   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:48.410920   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:48.410986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:48.448507   80228 cri.go:89] found id: ""
	I0814 17:38:48.448530   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.448537   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:48.448543   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:48.448594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:48.486257   80228 cri.go:89] found id: ""
	I0814 17:38:48.486298   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.486306   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:48.486312   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:48.486363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:48.520447   80228 cri.go:89] found id: ""
	I0814 17:38:48.520473   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.520482   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:48.520487   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:48.520544   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:48.552659   80228 cri.go:89] found id: ""
	I0814 17:38:48.552690   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.552698   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:48.552704   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:48.552768   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:48.585302   80228 cri.go:89] found id: ""
	I0814 17:38:48.585331   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.585341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:48.585348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:48.585415   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:48.617388   80228 cri.go:89] found id: ""
	I0814 17:38:48.617417   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.617428   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:48.617435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:48.617503   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:48.658987   80228 cri.go:89] found id: ""
	I0814 17:38:48.659012   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.659019   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:48.659027   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:48.659041   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:48.719882   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:48.719915   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:48.738962   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:48.738994   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:48.807703   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:48.807727   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:48.807739   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:48.886555   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:48.886585   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:48.514199   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.013627   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:50.444135   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:52.444957   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.446434   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.760518   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.260283   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.423653   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:51.436700   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:51.436792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:51.473198   80228 cri.go:89] found id: ""
	I0814 17:38:51.473227   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.473256   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:51.473262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:51.473311   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:51.508631   80228 cri.go:89] found id: ""
	I0814 17:38:51.508664   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.508675   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:51.508682   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:51.508743   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:51.540917   80228 cri.go:89] found id: ""
	I0814 17:38:51.540950   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.540958   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:51.540965   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:51.541014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:51.578112   80228 cri.go:89] found id: ""
	I0814 17:38:51.578140   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.578150   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:51.578158   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:51.578220   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:51.612664   80228 cri.go:89] found id: ""
	I0814 17:38:51.612692   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.612700   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:51.612706   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:51.612756   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:51.646374   80228 cri.go:89] found id: ""
	I0814 17:38:51.646399   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.646407   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:51.646413   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:51.646463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:51.682052   80228 cri.go:89] found id: ""
	I0814 17:38:51.682081   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.682092   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:51.682098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:51.682147   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:51.722625   80228 cri.go:89] found id: ""
	I0814 17:38:51.722653   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.722663   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:51.722674   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:51.722687   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:51.771788   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:51.771818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:51.785403   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:51.785432   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:51.854249   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:51.854269   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:51.854281   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:51.938121   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:51.938155   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.475672   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:54.491309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:54.491399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:54.524971   80228 cri.go:89] found id: ""
	I0814 17:38:54.525001   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.525011   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:54.525023   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:54.525087   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:54.560631   80228 cri.go:89] found id: ""
	I0814 17:38:54.560661   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.560670   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:54.560675   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:54.560728   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:54.595710   80228 cri.go:89] found id: ""
	I0814 17:38:54.595740   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.595751   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:54.595759   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:54.595824   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:54.631449   80228 cri.go:89] found id: ""
	I0814 17:38:54.631476   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.631487   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:54.631495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:54.631557   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:54.666492   80228 cri.go:89] found id: ""
	I0814 17:38:54.666526   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.666539   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:54.666548   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:54.666617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:54.701111   80228 cri.go:89] found id: ""
	I0814 17:38:54.701146   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.701158   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:54.701166   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:54.701226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:54.737535   80228 cri.go:89] found id: ""
	I0814 17:38:54.737574   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.737585   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:54.737595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:54.737653   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:54.771658   80228 cri.go:89] found id: ""
	I0814 17:38:54.771679   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.771686   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:54.771694   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:54.771709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:54.841798   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:54.841817   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:54.841829   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:54.930861   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:54.930917   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.970508   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:54.970540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:55.023077   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:55.023123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:53.513137   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.014005   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.945376   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:59.445437   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.260436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:58.759613   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:57.538876   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:57.551796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:57.551868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:57.584576   80228 cri.go:89] found id: ""
	I0814 17:38:57.584601   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.584609   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:57.584617   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:57.584687   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:57.617209   80228 cri.go:89] found id: ""
	I0814 17:38:57.617239   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.617249   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:57.617257   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:57.617338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:57.650062   80228 cri.go:89] found id: ""
	I0814 17:38:57.650089   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.650096   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:57.650102   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:57.650160   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:57.681118   80228 cri.go:89] found id: ""
	I0814 17:38:57.681146   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.681154   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:57.681160   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:57.681228   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:57.713803   80228 cri.go:89] found id: ""
	I0814 17:38:57.713834   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.713842   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:57.713851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:57.713920   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:57.749555   80228 cri.go:89] found id: ""
	I0814 17:38:57.749594   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.749604   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:57.749613   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:57.749677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:57.782714   80228 cri.go:89] found id: ""
	I0814 17:38:57.782744   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.782755   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:57.782763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:57.782826   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:57.815386   80228 cri.go:89] found id: ""
	I0814 17:38:57.815414   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.815423   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:57.815436   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:57.815450   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:57.868153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:57.868183   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:57.881022   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:57.881053   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:57.950474   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:57.950501   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:57.950515   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:58.032611   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:58.032644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:00.569493   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:00.583257   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:00.583384   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:00.614680   80228 cri.go:89] found id: ""
	I0814 17:39:00.614712   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.614723   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:00.614732   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:00.614792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:00.648161   80228 cri.go:89] found id: ""
	I0814 17:39:00.648189   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.648196   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:00.648203   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:00.648256   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:00.681844   80228 cri.go:89] found id: ""
	I0814 17:39:00.681872   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.681883   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:00.681890   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:00.681952   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:00.714773   80228 cri.go:89] found id: ""
	I0814 17:39:00.714804   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.714815   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:00.714823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:00.714891   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:00.747748   80228 cri.go:89] found id: ""
	I0814 17:39:00.747774   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.747781   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:00.747787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:00.747845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:00.783135   80228 cri.go:89] found id: ""
	I0814 17:39:00.783168   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.783179   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:00.783186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:00.783242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:00.817505   80228 cri.go:89] found id: ""
	I0814 17:39:00.817541   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.817552   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:00.817567   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:00.817633   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:00.849205   80228 cri.go:89] found id: ""
	I0814 17:39:00.849231   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.849241   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:00.849252   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:00.849273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:00.902529   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:00.902567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:00.916313   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:00.916346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:00.988708   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:00.988725   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:00.988737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:01.063818   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:01.063853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:58.512313   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.513694   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:01.944987   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.945640   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.759979   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.259928   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.603241   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:03.616400   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:03.616504   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:03.649580   80228 cri.go:89] found id: ""
	I0814 17:39:03.649619   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.649637   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:03.649650   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:03.649718   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:03.686252   80228 cri.go:89] found id: ""
	I0814 17:39:03.686274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.686282   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:03.686289   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:03.686349   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:03.720995   80228 cri.go:89] found id: ""
	I0814 17:39:03.721024   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.721036   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:03.721043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:03.721094   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:03.753466   80228 cri.go:89] found id: ""
	I0814 17:39:03.753491   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.753500   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:03.753506   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:03.753554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:03.794427   80228 cri.go:89] found id: ""
	I0814 17:39:03.794450   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.794458   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:03.794464   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:03.794524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:03.826245   80228 cri.go:89] found id: ""
	I0814 17:39:03.826274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.826282   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:03.826288   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:03.826355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:03.857208   80228 cri.go:89] found id: ""
	I0814 17:39:03.857238   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.857247   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:03.857253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:03.857325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:03.892840   80228 cri.go:89] found id: ""
	I0814 17:39:03.892864   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.892871   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:03.892879   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:03.892891   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:03.948554   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:03.948579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:03.962222   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:03.962249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:04.031833   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:04.031859   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:04.031875   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:04.109572   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:04.109636   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:03.013542   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.513201   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.444222   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:08.444833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.759653   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:07.760063   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.259961   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.646923   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:06.659699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:06.659757   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:06.691918   80228 cri.go:89] found id: ""
	I0814 17:39:06.691941   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.691951   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:06.691958   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:06.692016   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:06.722675   80228 cri.go:89] found id: ""
	I0814 17:39:06.722703   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.722713   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:06.722720   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:06.722782   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:06.757215   80228 cri.go:89] found id: ""
	I0814 17:39:06.757248   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.757259   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:06.757266   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:06.757333   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:06.791337   80228 cri.go:89] found id: ""
	I0814 17:39:06.791370   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.791378   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:06.791384   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:06.791440   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:06.825182   80228 cri.go:89] found id: ""
	I0814 17:39:06.825209   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.825220   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:06.825234   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:06.825288   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:06.857473   80228 cri.go:89] found id: ""
	I0814 17:39:06.857498   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.857507   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:06.857514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:06.857582   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:06.891293   80228 cri.go:89] found id: ""
	I0814 17:39:06.891343   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.891355   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:06.891363   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:06.891421   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:06.927476   80228 cri.go:89] found id: ""
	I0814 17:39:06.927505   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.927516   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:06.927527   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:06.927541   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:06.980604   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:06.980635   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:06.994648   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:06.994678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:07.072554   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:07.072580   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:07.072599   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:07.153141   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:07.153182   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:09.693348   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:09.705754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:09.705814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:09.739674   80228 cri.go:89] found id: ""
	I0814 17:39:09.739706   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.739717   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:09.739724   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:09.739788   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:09.774381   80228 cri.go:89] found id: ""
	I0814 17:39:09.774405   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.774413   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:09.774420   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:09.774478   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:09.806586   80228 cri.go:89] found id: ""
	I0814 17:39:09.806614   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.806623   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:09.806629   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:09.806696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:09.839564   80228 cri.go:89] found id: ""
	I0814 17:39:09.839594   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.839602   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:09.839614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:09.839672   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:09.872338   80228 cri.go:89] found id: ""
	I0814 17:39:09.872373   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.872385   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:09.872393   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:09.872457   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:09.904184   80228 cri.go:89] found id: ""
	I0814 17:39:09.904223   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.904231   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:09.904253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:09.904312   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:09.937217   80228 cri.go:89] found id: ""
	I0814 17:39:09.937242   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.937251   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:09.937259   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:09.937322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:09.972273   80228 cri.go:89] found id: ""
	I0814 17:39:09.972301   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.972313   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:09.972325   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:09.972341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:10.023736   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:10.023764   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:10.036654   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:10.036681   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:10.104031   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:10.104052   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:10.104068   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:10.187816   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:10.187853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:08.013632   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.513090   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.944491   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.945542   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.260129   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:14.759867   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
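The interleaved pod_ready.go lines come from three other profiles polling their metrics-server pods, whose Ready condition stays False throughout this window. A one-off equivalent of that poll, using one of the pod names from the log above (the kubectl context name is a placeholder, not taken from this report):

	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-8c2cx \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False while the pod is unready
	kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-8c2cx   # events explain why it never becomes Ready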
	I0814 17:39:12.727237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:12.741970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:12.742041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:12.778721   80228 cri.go:89] found id: ""
	I0814 17:39:12.778748   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.778758   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:12.778765   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:12.778820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:12.812575   80228 cri.go:89] found id: ""
	I0814 17:39:12.812603   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.812610   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:12.812619   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:12.812678   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:12.845697   80228 cri.go:89] found id: ""
	I0814 17:39:12.845726   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.845737   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:12.845744   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:12.845809   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:12.879491   80228 cri.go:89] found id: ""
	I0814 17:39:12.879518   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.879529   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:12.879536   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:12.879604   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:12.912321   80228 cri.go:89] found id: ""
	I0814 17:39:12.912348   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.912356   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:12.912361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:12.912410   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:12.948866   80228 cri.go:89] found id: ""
	I0814 17:39:12.948889   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.948897   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:12.948903   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:12.948963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:12.983394   80228 cri.go:89] found id: ""
	I0814 17:39:12.983444   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.983459   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:12.983466   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:12.983530   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:13.018406   80228 cri.go:89] found id: ""
	I0814 17:39:13.018427   80228 logs.go:276] 0 containers: []
	W0814 17:39:13.018434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:13.018442   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:13.018457   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.069615   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:13.069655   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:13.082618   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:13.082651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:13.145033   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:13.145054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:13.145067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:13.225074   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:13.225108   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:15.765512   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:15.778320   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:15.778380   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:15.812847   80228 cri.go:89] found id: ""
	I0814 17:39:15.812876   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.812885   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:15.812896   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:15.812944   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:15.845131   80228 cri.go:89] found id: ""
	I0814 17:39:15.845159   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.845169   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:15.845176   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:15.845242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:15.879763   80228 cri.go:89] found id: ""
	I0814 17:39:15.879789   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.879799   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:15.879807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:15.879864   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:15.912746   80228 cri.go:89] found id: ""
	I0814 17:39:15.912776   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.912784   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:15.912797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:15.912858   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:15.946433   80228 cri.go:89] found id: ""
	I0814 17:39:15.946456   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.946465   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:15.946473   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:15.946534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:15.980060   80228 cri.go:89] found id: ""
	I0814 17:39:15.980086   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.980096   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:15.980103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:15.980167   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:16.011539   80228 cri.go:89] found id: ""
	I0814 17:39:16.011570   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.011581   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:16.011590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:16.011660   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:16.046019   80228 cri.go:89] found id: ""
	I0814 17:39:16.046046   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.046057   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:16.046068   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:16.046083   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:16.058442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:16.058470   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:16.132775   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:16.132799   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:16.132811   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:16.218360   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:16.218398   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:16.258070   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:16.258096   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.013275   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.013967   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.444280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:17.444827   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.943845   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:16.760773   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.259891   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:18.813127   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:18.826187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:18.826267   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:18.858405   80228 cri.go:89] found id: ""
	I0814 17:39:18.858433   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.858444   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:18.858452   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:18.858524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:18.893302   80228 cri.go:89] found id: ""
	I0814 17:39:18.893335   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.893342   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:18.893350   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:18.893417   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:18.929885   80228 cri.go:89] found id: ""
	I0814 17:39:18.929919   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.929929   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:18.929937   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:18.930000   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:18.966758   80228 cri.go:89] found id: ""
	I0814 17:39:18.966783   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.966792   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:18.966799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:18.966861   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:18.999815   80228 cri.go:89] found id: ""
	I0814 17:39:18.999838   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.999845   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:18.999851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:18.999915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:19.033737   80228 cri.go:89] found id: ""
	I0814 17:39:19.033761   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.033768   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:19.033774   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:19.033830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:19.070080   80228 cri.go:89] found id: ""
	I0814 17:39:19.070105   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.070113   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:19.070119   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:19.070190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:19.102868   80228 cri.go:89] found id: ""
	I0814 17:39:19.102897   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.102907   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:19.102918   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:19.102932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:19.156525   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:19.156569   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:19.170193   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:19.170225   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:19.236521   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:19.236547   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:19.236561   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:19.315984   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:19.316024   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:17.512553   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.513046   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.513082   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:22.444948   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:24.945111   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.260362   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:23.260567   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.855554   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:21.868457   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:21.868527   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:21.902098   80228 cri.go:89] found id: ""
	I0814 17:39:21.902124   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.902132   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:21.902139   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:21.902200   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:21.934876   80228 cri.go:89] found id: ""
	I0814 17:39:21.934908   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.934919   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:21.934926   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:21.934987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:21.976507   80228 cri.go:89] found id: ""
	I0814 17:39:21.976536   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.976548   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:21.976555   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:21.976617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:22.013876   80228 cri.go:89] found id: ""
	I0814 17:39:22.013897   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.013904   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:22.013909   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:22.013955   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:22.051943   80228 cri.go:89] found id: ""
	I0814 17:39:22.051969   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.051979   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:22.051999   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:22.052064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:22.084702   80228 cri.go:89] found id: ""
	I0814 17:39:22.084725   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.084733   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:22.084738   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:22.084784   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:22.117397   80228 cri.go:89] found id: ""
	I0814 17:39:22.117424   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.117432   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:22.117439   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:22.117490   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:22.154139   80228 cri.go:89] found id: ""
	I0814 17:39:22.154168   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.154178   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:22.154189   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:22.154201   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:22.205550   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:22.205580   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:22.219644   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:22.219679   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:22.288934   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:22.288957   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:22.288969   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:22.372917   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:22.372954   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:24.912578   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:24.925365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:24.925430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:24.961207   80228 cri.go:89] found id: ""
	I0814 17:39:24.961234   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.961248   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:24.961257   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:24.961339   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:24.998878   80228 cri.go:89] found id: ""
	I0814 17:39:24.998904   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.998911   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:24.998918   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:24.998971   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:25.034141   80228 cri.go:89] found id: ""
	I0814 17:39:25.034174   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.034187   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:25.034196   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:25.034274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:25.075634   80228 cri.go:89] found id: ""
	I0814 17:39:25.075667   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.075679   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:25.075688   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:25.075759   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:25.112890   80228 cri.go:89] found id: ""
	I0814 17:39:25.112929   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.112939   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:25.112946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:25.113007   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:25.152887   80228 cri.go:89] found id: ""
	I0814 17:39:25.152913   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.152921   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:25.152927   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:25.152987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:25.186421   80228 cri.go:89] found id: ""
	I0814 17:39:25.186452   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.186463   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:25.186471   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:25.186537   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:25.220390   80228 cri.go:89] found id: ""
	I0814 17:39:25.220417   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.220425   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:25.220432   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:25.220446   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:25.296112   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:25.296146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:25.335421   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:25.335449   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:25.387690   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:25.387718   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:25.401926   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:25.401957   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:25.471111   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:24.012534   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:26.513529   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.445280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:29.445416   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:25.759098   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.759924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:30.259610   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.972237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:27.985512   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:27.985575   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:28.019454   80228 cri.go:89] found id: ""
	I0814 17:39:28.019482   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.019493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:28.019502   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:28.019566   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:28.056908   80228 cri.go:89] found id: ""
	I0814 17:39:28.056931   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.056939   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:28.056944   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:28.056998   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:28.090678   80228 cri.go:89] found id: ""
	I0814 17:39:28.090707   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.090715   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:28.090721   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:28.090785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:28.125557   80228 cri.go:89] found id: ""
	I0814 17:39:28.125591   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.125609   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:28.125620   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:28.125682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:28.158092   80228 cri.go:89] found id: ""
	I0814 17:39:28.158121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.158129   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:28.158135   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:28.158191   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:28.193403   80228 cri.go:89] found id: ""
	I0814 17:39:28.193434   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.193445   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:28.193454   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:28.193524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:28.231095   80228 cri.go:89] found id: ""
	I0814 17:39:28.231121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.231131   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:28.231139   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:28.231203   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:28.280157   80228 cri.go:89] found id: ""
	I0814 17:39:28.280185   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.280196   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:28.280207   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:28.280220   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:28.352877   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:28.352894   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:28.352906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:28.439692   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:28.439736   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:28.479986   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:28.480012   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:28.538454   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:28.538493   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.052941   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:31.065810   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:31.065879   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:31.097988   80228 cri.go:89] found id: ""
	I0814 17:39:31.098013   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.098020   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:31.098045   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:31.098102   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:31.130126   80228 cri.go:89] found id: ""
	I0814 17:39:31.130152   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.130160   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:31.130166   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:31.130225   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:31.165945   80228 cri.go:89] found id: ""
	I0814 17:39:31.165984   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.165995   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:31.166003   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:31.166064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:31.199749   80228 cri.go:89] found id: ""
	I0814 17:39:31.199772   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.199778   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:31.199784   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:31.199843   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:31.231398   80228 cri.go:89] found id: ""
	I0814 17:39:31.231425   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.231436   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:31.231444   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:31.231528   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:31.263842   80228 cri.go:89] found id: ""
	I0814 17:39:31.263868   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.263878   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:31.263885   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:31.263950   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:31.299258   80228 cri.go:89] found id: ""
	I0814 17:39:31.299289   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.299301   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:31.299309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:31.299399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:29.013468   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.013638   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.445769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:33.944939   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:32.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:34.262303   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.332626   80228 cri.go:89] found id: ""
	I0814 17:39:31.332649   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.332657   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:31.332666   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:31.332678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:31.369262   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:31.369288   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:31.426003   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:31.426034   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.439583   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:31.439611   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:31.507538   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:31.507563   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:31.507583   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.085342   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:34.097491   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:34.097567   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:34.129220   80228 cri.go:89] found id: ""
	I0814 17:39:34.129244   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.129254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:34.129262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:34.129322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:34.161233   80228 cri.go:89] found id: ""
	I0814 17:39:34.161256   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.161264   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:34.161270   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:34.161334   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:34.193649   80228 cri.go:89] found id: ""
	I0814 17:39:34.193675   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.193683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:34.193689   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:34.193754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:34.226722   80228 cri.go:89] found id: ""
	I0814 17:39:34.226753   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.226763   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:34.226772   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:34.226842   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:34.259735   80228 cri.go:89] found id: ""
	I0814 17:39:34.259761   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.259774   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:34.259787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:34.259850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:34.296804   80228 cri.go:89] found id: ""
	I0814 17:39:34.296830   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.296838   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:34.296844   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:34.296894   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:34.328942   80228 cri.go:89] found id: ""
	I0814 17:39:34.328973   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.328982   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:34.328988   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:34.329041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:34.360820   80228 cri.go:89] found id: ""
	I0814 17:39:34.360847   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.360858   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:34.360868   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:34.360882   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:34.411106   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:34.411142   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:34.424737   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:34.424769   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:34.489094   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:34.489122   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:34.489138   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.569783   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:34.569818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:33.015308   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.513073   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.945264   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:38.444913   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:36.760740   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:39.260499   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:37.107492   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:37.120829   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:37.120901   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:37.154556   80228 cri.go:89] found id: ""
	I0814 17:39:37.154589   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.154601   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:37.154609   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:37.154673   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:37.192570   80228 cri.go:89] found id: ""
	I0814 17:39:37.192602   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.192609   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:37.192615   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:37.192679   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:37.225845   80228 cri.go:89] found id: ""
	I0814 17:39:37.225891   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.225902   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:37.225917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:37.225986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:37.262370   80228 cri.go:89] found id: ""
	I0814 17:39:37.262399   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.262408   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:37.262416   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:37.262481   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:37.297642   80228 cri.go:89] found id: ""
	I0814 17:39:37.297669   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.297680   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:37.297687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:37.297754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:37.331006   80228 cri.go:89] found id: ""
	I0814 17:39:37.331032   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.331041   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:37.331046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:37.331111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:37.364753   80228 cri.go:89] found id: ""
	I0814 17:39:37.364777   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.364786   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:37.364792   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:37.364850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:37.397722   80228 cri.go:89] found id: ""
	I0814 17:39:37.397748   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.397760   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:37.397770   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:37.397785   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:37.473616   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:37.473643   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:37.473659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:37.557672   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:37.557710   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.596337   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:37.596368   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:37.646815   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:37.646853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.160391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:40.174099   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:40.174181   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:40.208783   80228 cri.go:89] found id: ""
	I0814 17:39:40.208814   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.208821   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:40.208829   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:40.208880   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:40.243555   80228 cri.go:89] found id: ""
	I0814 17:39:40.243580   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.243588   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:40.243594   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:40.243661   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:40.276685   80228 cri.go:89] found id: ""
	I0814 17:39:40.276711   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.276723   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:40.276731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:40.276795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:40.309893   80228 cri.go:89] found id: ""
	I0814 17:39:40.309925   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.309937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:40.309944   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:40.310073   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:40.341724   80228 cri.go:89] found id: ""
	I0814 17:39:40.341751   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.341762   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:40.341770   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:40.341834   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:40.376442   80228 cri.go:89] found id: ""
	I0814 17:39:40.376478   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.376487   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:40.376495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:40.376558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:40.419240   80228 cri.go:89] found id: ""
	I0814 17:39:40.419269   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.419277   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:40.419284   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:40.419374   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:40.464678   80228 cri.go:89] found id: ""
	I0814 17:39:40.464703   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.464712   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:40.464721   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:40.464737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:40.531138   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:40.531175   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.546809   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:40.546842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:40.618791   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:40.618809   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:40.618821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:40.706169   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:40.706219   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.513604   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.013349   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.445989   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:42.944417   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:41.261429   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.760436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.250987   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:43.266109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:43.266179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:43.301860   80228 cri.go:89] found id: ""
	I0814 17:39:43.301891   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.301899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:43.301908   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:43.301991   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:43.337166   80228 cri.go:89] found id: ""
	I0814 17:39:43.337195   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.337205   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:43.337212   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:43.337262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:43.370640   80228 cri.go:89] found id: ""
	I0814 17:39:43.370671   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.370683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:43.370696   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:43.370752   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:43.405598   80228 cri.go:89] found id: ""
	I0814 17:39:43.405624   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.405632   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:43.405638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:43.405705   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:43.437161   80228 cri.go:89] found id: ""
	I0814 17:39:43.437184   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.437192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:43.437198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:43.437295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:43.470675   80228 cri.go:89] found id: ""
	I0814 17:39:43.470707   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.470718   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:43.470726   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:43.470787   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:43.503036   80228 cri.go:89] found id: ""
	I0814 17:39:43.503062   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.503073   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:43.503081   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:43.503149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:43.538269   80228 cri.go:89] found id: ""
	I0814 17:39:43.538296   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.538304   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:43.538328   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:43.538340   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:43.621889   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:43.621936   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:43.667460   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:43.667491   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:43.723630   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:43.723663   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:43.738905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:43.738939   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:43.805484   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:46.306031   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:42.512438   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:44.513112   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.513203   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:45.445470   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:47.944790   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.260236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:48.260662   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.324624   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:46.324696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:46.360039   80228 cri.go:89] found id: ""
	I0814 17:39:46.360066   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.360074   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:46.360082   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:46.360131   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:46.413735   80228 cri.go:89] found id: ""
	I0814 17:39:46.413767   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.413779   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:46.413788   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:46.413876   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:46.458823   80228 cri.go:89] found id: ""
	I0814 17:39:46.458851   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.458861   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:46.458869   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:46.458928   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:46.495347   80228 cri.go:89] found id: ""
	I0814 17:39:46.495378   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.495387   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:46.495392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:46.495441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:46.531502   80228 cri.go:89] found id: ""
	I0814 17:39:46.531533   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.531545   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:46.531554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:46.531624   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:46.564450   80228 cri.go:89] found id: ""
	I0814 17:39:46.564473   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.564482   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:46.564488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:46.564535   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:46.598293   80228 cri.go:89] found id: ""
	I0814 17:39:46.598401   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.598421   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:46.598431   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:46.598498   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:46.632370   80228 cri.go:89] found id: ""
	I0814 17:39:46.632400   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.632411   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:46.632423   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:46.632438   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:46.711814   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:46.711848   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:46.749410   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:46.749443   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:46.801686   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:46.801720   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:46.815196   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:46.815218   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:46.885648   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.386223   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:49.399359   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:49.399430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:49.432133   80228 cri.go:89] found id: ""
	I0814 17:39:49.432168   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.432179   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:49.432186   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:49.432250   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:49.469760   80228 cri.go:89] found id: ""
	I0814 17:39:49.469790   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.469799   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:49.469811   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:49.469873   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:49.500437   80228 cri.go:89] found id: ""
	I0814 17:39:49.500466   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.500474   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:49.500481   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:49.500531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:49.533685   80228 cri.go:89] found id: ""
	I0814 17:39:49.533709   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.533717   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:49.533723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:49.533790   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:49.570551   80228 cri.go:89] found id: ""
	I0814 17:39:49.570577   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.570584   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:49.570590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:49.570654   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:49.606649   80228 cri.go:89] found id: ""
	I0814 17:39:49.606672   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.606680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:49.606686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:49.606734   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:49.638060   80228 cri.go:89] found id: ""
	I0814 17:39:49.638090   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.638101   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:49.638109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:49.638178   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:49.674503   80228 cri.go:89] found id: ""
	I0814 17:39:49.674526   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.674534   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:49.674543   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:49.674563   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:49.710185   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:49.710213   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:49.764112   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:49.764146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:49.777862   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:49.777888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:49.849786   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.849806   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:49.849819   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:48.513418   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:51.013242   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.444526   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.444788   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.944646   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.759890   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.760236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.760324   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.429811   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:52.444364   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:52.444441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:52.483047   80228 cri.go:89] found id: ""
	I0814 17:39:52.483074   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.483085   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:52.483093   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:52.483157   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:52.520236   80228 cri.go:89] found id: ""
	I0814 17:39:52.520264   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.520274   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:52.520287   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:52.520353   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:52.553757   80228 cri.go:89] found id: ""
	I0814 17:39:52.553784   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.553795   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:52.553802   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:52.553869   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:52.588782   80228 cri.go:89] found id: ""
	I0814 17:39:52.588808   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.588818   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:52.588827   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:52.588893   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:52.620144   80228 cri.go:89] found id: ""
	I0814 17:39:52.620180   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.620192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:52.620201   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:52.620274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:52.652712   80228 cri.go:89] found id: ""
	I0814 17:39:52.652743   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.652755   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:52.652763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:52.652825   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:52.687789   80228 cri.go:89] found id: ""
	I0814 17:39:52.687819   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.687831   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:52.687838   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:52.687892   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:52.718996   80228 cri.go:89] found id: ""
	I0814 17:39:52.719021   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.719031   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:52.719041   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:52.719055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:52.775775   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:52.775808   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:52.789024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:52.789055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:52.863320   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:52.863351   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:52.863366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:52.941533   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:52.941571   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:55.477833   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:55.490723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:55.490783   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:55.525816   80228 cri.go:89] found id: ""
	I0814 17:39:55.525844   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.525852   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:55.525859   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:55.525908   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:55.561855   80228 cri.go:89] found id: ""
	I0814 17:39:55.561878   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.561887   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:55.561892   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:55.561949   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:55.599997   80228 cri.go:89] found id: ""
	I0814 17:39:55.600027   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.600038   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:55.600046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:55.600112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:55.632869   80228 cri.go:89] found id: ""
	I0814 17:39:55.632902   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.632914   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:55.632922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:55.632990   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:55.666029   80228 cri.go:89] found id: ""
	I0814 17:39:55.666055   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.666066   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:55.666079   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:55.666136   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:55.697222   80228 cri.go:89] found id: ""
	I0814 17:39:55.697247   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.697254   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:55.697260   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:55.697308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:55.729517   80228 cri.go:89] found id: ""
	I0814 17:39:55.729549   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.729561   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:55.729576   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:55.729640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:55.763890   80228 cri.go:89] found id: ""
	I0814 17:39:55.763922   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.763934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:55.763944   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:55.763960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:55.819588   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:55.819624   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:55.833281   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:55.833314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:55.904610   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:55.904632   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:55.904644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:55.981035   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:55.981069   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:53.513407   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:55.513734   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:56.945649   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.444937   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:57.259832   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.760669   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:58.522870   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:58.536151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:58.536224   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:58.568827   80228 cri.go:89] found id: ""
	I0814 17:39:58.568857   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.568869   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:58.568877   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:58.568946   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:58.600523   80228 cri.go:89] found id: ""
	I0814 17:39:58.600554   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.600564   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:58.600571   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:58.600640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:58.634201   80228 cri.go:89] found id: ""
	I0814 17:39:58.634232   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.634240   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:58.634245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:58.634308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:58.668746   80228 cri.go:89] found id: ""
	I0814 17:39:58.668772   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.668781   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:58.668787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:58.668847   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:58.699695   80228 cri.go:89] found id: ""
	I0814 17:39:58.699727   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.699739   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:58.699752   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:58.699836   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:58.731047   80228 cri.go:89] found id: ""
	I0814 17:39:58.731081   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.731095   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:58.731103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:58.731168   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:58.773454   80228 cri.go:89] found id: ""
	I0814 17:39:58.773486   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.773495   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:58.773501   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:58.773561   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:58.810135   80228 cri.go:89] found id: ""
	I0814 17:39:58.810159   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.810166   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:58.810175   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:58.810191   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:58.844897   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:58.844925   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:58.901700   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:58.901745   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:58.914272   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:58.914296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:58.984593   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:58.984610   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:58.984622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:57.513854   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:00.013241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.945861   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.444575   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:02.262241   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.760164   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.563227   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:01.576764   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:01.576840   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:01.610842   80228 cri.go:89] found id: ""
	I0814 17:40:01.610871   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.610878   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:01.610884   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:01.610935   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:01.643774   80228 cri.go:89] found id: ""
	I0814 17:40:01.643806   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.643816   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:01.643824   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:01.643888   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:01.677867   80228 cri.go:89] found id: ""
	I0814 17:40:01.677892   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.677899   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:01.677906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:01.677967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:01.712394   80228 cri.go:89] found id: ""
	I0814 17:40:01.712420   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.712427   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:01.712433   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:01.712492   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:01.745637   80228 cri.go:89] found id: ""
	I0814 17:40:01.745666   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.745676   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:01.745683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:01.745745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:01.782364   80228 cri.go:89] found id: ""
	I0814 17:40:01.782394   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.782404   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:01.782411   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:01.782484   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:01.814569   80228 cri.go:89] found id: ""
	I0814 17:40:01.814596   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.814605   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:01.814614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:01.814674   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:01.850421   80228 cri.go:89] found id: ""
	I0814 17:40:01.850450   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.850459   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:01.850468   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:01.850482   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:01.862965   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:01.863001   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:01.931312   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:01.931357   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:01.931375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:02.008236   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:02.008278   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:02.043238   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:02.043267   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:04.596909   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:04.610091   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:04.610158   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:04.645169   80228 cri.go:89] found id: ""
	I0814 17:40:04.645195   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.645205   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:04.645213   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:04.645279   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:04.677708   80228 cri.go:89] found id: ""
	I0814 17:40:04.677740   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.677750   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:04.677761   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:04.677823   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:04.710319   80228 cri.go:89] found id: ""
	I0814 17:40:04.710351   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.710362   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:04.710374   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:04.710443   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:04.745166   80228 cri.go:89] found id: ""
	I0814 17:40:04.745202   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.745219   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:04.745226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:04.745287   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:04.777307   80228 cri.go:89] found id: ""
	I0814 17:40:04.777354   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.777376   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:04.777383   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:04.777447   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:04.813854   80228 cri.go:89] found id: ""
	I0814 17:40:04.813886   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.813901   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:04.813908   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:04.813972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:04.848014   80228 cri.go:89] found id: ""
	I0814 17:40:04.848041   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.848049   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:04.848055   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:04.848113   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:04.882689   80228 cri.go:89] found id: ""
	I0814 17:40:04.882719   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.882731   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:04.882742   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:04.882760   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:04.952074   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:04.952096   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:04.952112   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:05.030258   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:05.030300   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:05.066509   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:05.066542   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:05.120153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:05.120195   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:02.512935   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.513254   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.445637   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.945142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.760223   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.760857   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:07.634404   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:07.646900   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:07.646966   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:07.678654   80228 cri.go:89] found id: ""
	I0814 17:40:07.678680   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.678689   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:07.678696   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:07.678753   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:07.711355   80228 cri.go:89] found id: ""
	I0814 17:40:07.711381   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.711389   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:07.711395   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:07.711446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:07.744134   80228 cri.go:89] found id: ""
	I0814 17:40:07.744161   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.744169   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:07.744179   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:07.744242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:07.776981   80228 cri.go:89] found id: ""
	I0814 17:40:07.777008   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.777015   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:07.777022   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:07.777086   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:07.811626   80228 cri.go:89] found id: ""
	I0814 17:40:07.811651   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.811661   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:07.811667   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:07.811720   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:07.843218   80228 cri.go:89] found id: ""
	I0814 17:40:07.843251   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.843262   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:07.843270   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:07.843355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:07.875208   80228 cri.go:89] found id: ""
	I0814 17:40:07.875232   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.875239   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:07.875245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:07.875295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:07.907896   80228 cri.go:89] found id: ""
	I0814 17:40:07.907923   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.907934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:07.907945   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:07.907960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:07.959717   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:07.959753   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:07.973050   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:07.973081   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:08.035085   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:08.035107   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:08.035120   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:08.109722   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:08.109770   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:10.648203   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:10.661194   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:10.661280   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:10.698401   80228 cri.go:89] found id: ""
	I0814 17:40:10.698431   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.698442   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:10.698450   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:10.698515   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:10.730057   80228 cri.go:89] found id: ""
	I0814 17:40:10.730083   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.730094   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:10.730101   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:10.730163   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:10.768780   80228 cri.go:89] found id: ""
	I0814 17:40:10.768807   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.768817   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:10.768824   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:10.768885   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:10.800866   80228 cri.go:89] found id: ""
	I0814 17:40:10.800898   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.800907   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:10.800917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:10.800984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:10.833741   80228 cri.go:89] found id: ""
	I0814 17:40:10.833771   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.833782   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:10.833789   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:10.833850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:10.865670   80228 cri.go:89] found id: ""
	I0814 17:40:10.865699   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.865706   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:10.865717   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:10.865770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:10.904726   80228 cri.go:89] found id: ""
	I0814 17:40:10.904757   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.904765   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:10.904771   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:10.904821   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:10.940549   80228 cri.go:89] found id: ""
	I0814 17:40:10.940578   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.940588   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:10.940598   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:10.940620   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:10.992592   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:10.992622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:11.006388   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:11.006412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:11.075455   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:11.075473   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:11.075486   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:11.156622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:11.156658   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:07.012878   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:09.013908   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.512592   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.444764   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.944931   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.259959   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.760823   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.695055   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:13.709460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:13.709531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:13.741941   80228 cri.go:89] found id: ""
	I0814 17:40:13.741967   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.741975   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:13.741981   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:13.742042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:13.773916   80228 cri.go:89] found id: ""
	I0814 17:40:13.773940   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.773947   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:13.773952   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:13.773999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:13.807871   80228 cri.go:89] found id: ""
	I0814 17:40:13.807902   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.807912   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:13.807918   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:13.807981   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:13.840902   80228 cri.go:89] found id: ""
	I0814 17:40:13.840931   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.840943   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:13.840952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:13.841018   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:13.871969   80228 cri.go:89] found id: ""
	I0814 17:40:13.871998   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.872010   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:13.872019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:13.872090   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:13.905502   80228 cri.go:89] found id: ""
	I0814 17:40:13.905524   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.905531   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:13.905537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:13.905599   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:13.937356   80228 cri.go:89] found id: ""
	I0814 17:40:13.937386   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.937396   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:13.937404   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:13.937466   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:13.972383   80228 cri.go:89] found id: ""
	I0814 17:40:13.972410   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.972418   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:13.972427   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:13.972448   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:14.022691   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:14.022717   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:14.035543   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:14.035567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:14.104869   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:14.104889   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:14.104905   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:14.182185   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:14.182221   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:13.513519   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.012958   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:15.945499   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.445122   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.259488   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.259706   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.259972   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.720519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:16.734323   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:16.734406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:16.769454   80228 cri.go:89] found id: ""
	I0814 17:40:16.769483   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.769493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:16.769501   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:16.769565   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:16.801513   80228 cri.go:89] found id: ""
	I0814 17:40:16.801541   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.801548   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:16.801554   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:16.801610   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:16.835184   80228 cri.go:89] found id: ""
	I0814 17:40:16.835212   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.835220   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:16.835226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:16.835275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:16.867162   80228 cri.go:89] found id: ""
	I0814 17:40:16.867192   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.867201   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:16.867207   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:16.867257   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:16.902912   80228 cri.go:89] found id: ""
	I0814 17:40:16.902942   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.902953   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:16.902961   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:16.903026   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:16.935004   80228 cri.go:89] found id: ""
	I0814 17:40:16.935033   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.935044   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:16.935052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:16.935115   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:16.969082   80228 cri.go:89] found id: ""
	I0814 17:40:16.969110   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.969120   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:16.969127   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:16.969194   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:17.002594   80228 cri.go:89] found id: ""
	I0814 17:40:17.002622   80228 logs.go:276] 0 containers: []
	W0814 17:40:17.002633   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:17.002644   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:17.002659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:17.054319   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:17.054359   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:17.068024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:17.068048   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:17.139480   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:17.139499   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:17.139514   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:17.222086   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:17.222140   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:19.758630   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:19.772186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:19.772254   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:19.807719   80228 cri.go:89] found id: ""
	I0814 17:40:19.807751   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.807760   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:19.807766   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:19.807830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:19.851023   80228 cri.go:89] found id: ""
	I0814 17:40:19.851054   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.851067   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:19.851083   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:19.851154   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:19.882961   80228 cri.go:89] found id: ""
	I0814 17:40:19.882987   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.882997   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:19.883005   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:19.883063   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:19.920312   80228 cri.go:89] found id: ""
	I0814 17:40:19.920345   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.920356   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:19.920365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:19.920430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:19.953628   80228 cri.go:89] found id: ""
	I0814 17:40:19.953658   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.953671   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:19.953683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:19.953741   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:19.984998   80228 cri.go:89] found id: ""
	I0814 17:40:19.985028   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.985036   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:19.985043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:19.985092   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:20.018728   80228 cri.go:89] found id: ""
	I0814 17:40:20.018753   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.018761   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:20.018766   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:20.018814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:20.050718   80228 cri.go:89] found id: ""
	I0814 17:40:20.050743   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.050757   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:20.050765   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:20.050777   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:20.101567   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:20.101602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:20.114890   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:20.114920   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:20.183926   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:20.183948   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:20.183960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:20.270195   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:20.270223   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:18.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.513633   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.445352   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.945704   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.260365   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:24.760475   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.807078   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:22.820187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:22.820260   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:22.852474   80228 cri.go:89] found id: ""
	I0814 17:40:22.852504   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.852514   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:22.852522   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:22.852596   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:22.887141   80228 cri.go:89] found id: ""
	I0814 17:40:22.887167   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.887177   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:22.887184   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:22.887248   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:22.919384   80228 cri.go:89] found id: ""
	I0814 17:40:22.919417   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.919428   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:22.919436   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:22.919502   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:22.951877   80228 cri.go:89] found id: ""
	I0814 17:40:22.951897   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.951905   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:22.951910   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:22.951965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:22.987712   80228 cri.go:89] found id: ""
	I0814 17:40:22.987742   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.987752   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:22.987760   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:22.987832   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:23.025562   80228 cri.go:89] found id: ""
	I0814 17:40:23.025597   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.025608   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:23.025616   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:23.025680   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:23.058928   80228 cri.go:89] found id: ""
	I0814 17:40:23.058955   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.058962   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:23.058969   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:23.059025   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:23.096807   80228 cri.go:89] found id: ""
	I0814 17:40:23.096836   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.096847   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:23.096858   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:23.096874   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:23.148943   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:23.148977   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:23.161905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:23.161927   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:23.232119   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:23.232147   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:23.232160   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:23.320693   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:23.320731   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:25.858506   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:25.871891   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:25.871964   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:25.904732   80228 cri.go:89] found id: ""
	I0814 17:40:25.904760   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.904769   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:25.904775   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:25.904830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:25.936317   80228 cri.go:89] found id: ""
	I0814 17:40:25.936347   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.936358   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:25.936365   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:25.936427   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:25.969921   80228 cri.go:89] found id: ""
	I0814 17:40:25.969946   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.969954   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:25.969960   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:25.970009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:26.022832   80228 cri.go:89] found id: ""
	I0814 17:40:26.022862   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.022872   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:26.022880   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:26.022941   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:26.056178   80228 cri.go:89] found id: ""
	I0814 17:40:26.056206   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.056214   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:26.056224   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:26.056275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:26.086921   80228 cri.go:89] found id: ""
	I0814 17:40:26.086955   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.086966   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:26.086974   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:26.087031   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:26.120631   80228 cri.go:89] found id: ""
	I0814 17:40:26.120665   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.120677   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:26.120686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:26.120745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:26.154258   80228 cri.go:89] found id: ""
	I0814 17:40:26.154289   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.154300   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:26.154310   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:26.154324   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:26.208366   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:26.208405   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:26.222160   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:26.222192   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:26.294737   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:26.294756   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:26.294768   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:22.513813   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.013707   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.444691   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.944277   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.945043   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.260184   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.262080   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:26.372870   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:26.372906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:28.908165   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:28.920754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:28.920816   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:28.953950   80228 cri.go:89] found id: ""
	I0814 17:40:28.953971   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.953978   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:28.953987   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:28.954035   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:28.985228   80228 cri.go:89] found id: ""
	I0814 17:40:28.985266   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.985278   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:28.985286   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:28.985347   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:29.016295   80228 cri.go:89] found id: ""
	I0814 17:40:29.016328   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.016336   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:29.016341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:29.016392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:29.048664   80228 cri.go:89] found id: ""
	I0814 17:40:29.048696   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.048707   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:29.048715   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:29.048778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:29.080441   80228 cri.go:89] found id: ""
	I0814 17:40:29.080466   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.080474   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:29.080520   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:29.080584   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:29.112377   80228 cri.go:89] found id: ""
	I0814 17:40:29.112407   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.112418   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:29.112426   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:29.112493   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:29.145368   80228 cri.go:89] found id: ""
	I0814 17:40:29.145395   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.145403   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:29.145409   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:29.145471   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:29.177305   80228 cri.go:89] found id: ""
	I0814 17:40:29.177333   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.177341   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:29.177350   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:29.177366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:29.232156   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:29.232197   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:29.245286   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:29.245317   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:29.322257   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:29.322286   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:29.322302   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:29.397679   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:29.397714   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:27.512862   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.514756   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.945087   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.444743   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.760242   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.259825   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.935264   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:31.948380   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:31.948446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:31.978898   80228 cri.go:89] found id: ""
	I0814 17:40:31.978925   80228 logs.go:276] 0 containers: []
	W0814 17:40:31.978932   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:31.978939   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:31.978989   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:32.010652   80228 cri.go:89] found id: ""
	I0814 17:40:32.010681   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.010692   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:32.010699   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:32.010767   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:32.044821   80228 cri.go:89] found id: ""
	I0814 17:40:32.044852   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.044860   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:32.044866   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:32.044915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:32.076359   80228 cri.go:89] found id: ""
	I0814 17:40:32.076388   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.076398   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:32.076406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:32.076469   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:32.107652   80228 cri.go:89] found id: ""
	I0814 17:40:32.107680   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.107692   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:32.107709   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:32.107770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:32.138445   80228 cri.go:89] found id: ""
	I0814 17:40:32.138473   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.138484   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:32.138492   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:32.138558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:32.173771   80228 cri.go:89] found id: ""
	I0814 17:40:32.173794   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.173802   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:32.173807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:32.173857   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:32.206387   80228 cri.go:89] found id: ""
	I0814 17:40:32.206418   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.206429   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:32.206441   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:32.206454   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.258114   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:32.258148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:32.271984   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:32.272009   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:32.335423   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:32.335447   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:32.335464   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:32.411155   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:32.411206   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:34.975280   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:34.988098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:34.988176   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:35.022020   80228 cri.go:89] found id: ""
	I0814 17:40:35.022047   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.022062   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:35.022071   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:35.022124   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:35.055528   80228 cri.go:89] found id: ""
	I0814 17:40:35.055568   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.055578   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:35.055586   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:35.055647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:35.088373   80228 cri.go:89] found id: ""
	I0814 17:40:35.088404   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.088415   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:35.088422   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:35.088489   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:35.123162   80228 cri.go:89] found id: ""
	I0814 17:40:35.123188   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.123198   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:35.123206   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:35.123268   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:35.160240   80228 cri.go:89] found id: ""
	I0814 17:40:35.160267   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.160277   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:35.160286   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:35.160348   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:35.196249   80228 cri.go:89] found id: ""
	I0814 17:40:35.196276   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.196285   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:35.196293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:35.196359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:35.232564   80228 cri.go:89] found id: ""
	I0814 17:40:35.232588   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.232598   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:35.232606   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:35.232671   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:35.267357   80228 cri.go:89] found id: ""
	I0814 17:40:35.267383   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.267392   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:35.267399   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:35.267412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:35.279779   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:35.279806   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:35.347748   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:35.347769   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:35.347782   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:35.427900   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:35.427932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:35.468925   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:35.468953   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.013942   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.513138   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.944749   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.444665   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.760292   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:38.020581   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:38.034985   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:38.035066   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:38.070206   80228 cri.go:89] found id: ""
	I0814 17:40:38.070231   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.070240   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:38.070246   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:38.070294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:38.103859   80228 cri.go:89] found id: ""
	I0814 17:40:38.103885   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.103893   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:38.103898   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:38.103947   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:38.138247   80228 cri.go:89] found id: ""
	I0814 17:40:38.138271   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.138278   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:38.138285   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:38.138345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:38.179475   80228 cri.go:89] found id: ""
	I0814 17:40:38.179511   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.179520   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:38.179526   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:38.179578   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:38.224892   80228 cri.go:89] found id: ""
	I0814 17:40:38.224922   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.224932   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:38.224940   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:38.224996   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:38.270456   80228 cri.go:89] found id: ""
	I0814 17:40:38.270485   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.270497   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:38.270504   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:38.270569   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:38.305267   80228 cri.go:89] found id: ""
	I0814 17:40:38.305300   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.305308   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:38.305315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:38.305387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:38.336942   80228 cri.go:89] found id: ""
	I0814 17:40:38.336978   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.336989   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:38.337000   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:38.337016   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:38.388618   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:38.388651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:38.403442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:38.403472   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:38.478225   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:38.478256   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:38.478273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:38.553400   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:38.553440   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:41.089947   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:41.101989   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:41.102070   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:41.133743   80228 cri.go:89] found id: ""
	I0814 17:40:41.133767   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.133774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:41.133780   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:41.133828   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:41.169671   80228 cri.go:89] found id: ""
	I0814 17:40:41.169706   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.169714   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:41.169721   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:41.169773   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:41.203425   80228 cri.go:89] found id: ""
	I0814 17:40:41.203451   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.203459   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:41.203475   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:41.203534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:41.237031   80228 cri.go:89] found id: ""
	I0814 17:40:41.237064   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.237075   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:41.237084   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:41.237149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:41.271095   80228 cri.go:89] found id: ""
	I0814 17:40:41.271120   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.271128   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:41.271134   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:41.271190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:41.303640   80228 cri.go:89] found id: ""
	I0814 17:40:41.303672   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.303684   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:41.303692   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:41.303755   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:37.013555   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.013733   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.013910   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.943472   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.944582   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.261795   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.759672   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.336010   80228 cri.go:89] found id: ""
	I0814 17:40:41.336047   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.336062   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:41.336071   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:41.336140   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:41.370098   80228 cri.go:89] found id: ""
	I0814 17:40:41.370133   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.370143   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:41.370154   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:41.370168   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:41.420760   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:41.420794   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:41.433651   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:41.433678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:41.506623   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:41.506644   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:41.506657   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:41.591390   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:41.591426   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:44.130649   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:44.144362   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:44.144428   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:44.178485   80228 cri.go:89] found id: ""
	I0814 17:40:44.178516   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.178527   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:44.178535   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:44.178600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:44.214231   80228 cri.go:89] found id: ""
	I0814 17:40:44.214260   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.214268   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:44.214274   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:44.214336   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:44.248483   80228 cri.go:89] found id: ""
	I0814 17:40:44.248513   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.248524   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:44.248531   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:44.248600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:44.282445   80228 cri.go:89] found id: ""
	I0814 17:40:44.282472   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.282481   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:44.282493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:44.282560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:44.315141   80228 cri.go:89] found id: ""
	I0814 17:40:44.315169   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.315190   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:44.315198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:44.315259   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:44.346756   80228 cri.go:89] found id: ""
	I0814 17:40:44.346781   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.346789   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:44.346795   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:44.346853   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:44.378143   80228 cri.go:89] found id: ""
	I0814 17:40:44.378172   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.378183   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:44.378191   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:44.378255   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:44.411526   80228 cri.go:89] found id: ""
	I0814 17:40:44.411557   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.411567   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:44.411578   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:44.411592   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:44.459873   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:44.459913   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:44.473112   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:44.473148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:44.547514   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:44.547546   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:44.547579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:44.630377   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:44.630415   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:43.512113   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.512590   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.945080   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.946506   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.760626   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:48.260015   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.260186   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.173094   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:47.185854   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:47.185927   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:47.228755   80228 cri.go:89] found id: ""
	I0814 17:40:47.228781   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.228788   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:47.228795   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:47.228851   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:47.264986   80228 cri.go:89] found id: ""
	I0814 17:40:47.265020   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.265031   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:47.265037   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:47.265100   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:47.296900   80228 cri.go:89] found id: ""
	I0814 17:40:47.296929   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.296940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:47.296947   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:47.297009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:47.328120   80228 cri.go:89] found id: ""
	I0814 17:40:47.328147   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.328155   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:47.328161   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:47.328210   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:47.364147   80228 cri.go:89] found id: ""
	I0814 17:40:47.364171   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.364178   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:47.364184   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:47.364238   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:47.400466   80228 cri.go:89] found id: ""
	I0814 17:40:47.400493   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.400501   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:47.400507   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:47.400562   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:47.432681   80228 cri.go:89] found id: ""
	I0814 17:40:47.432713   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.432724   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:47.432732   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:47.432801   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:47.465466   80228 cri.go:89] found id: ""
	I0814 17:40:47.465498   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.465510   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:47.465522   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:47.465536   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:47.502076   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:47.502114   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:47.554451   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:47.554488   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:47.567658   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:47.567690   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:47.635805   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:47.635829   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:47.635844   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:50.215353   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:50.227723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:50.227795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:50.258250   80228 cri.go:89] found id: ""
	I0814 17:40:50.258276   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.258287   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:50.258296   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:50.258363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:50.291371   80228 cri.go:89] found id: ""
	I0814 17:40:50.291406   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.291416   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:50.291423   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:50.291479   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:50.321449   80228 cri.go:89] found id: ""
	I0814 17:40:50.321473   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.321481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:50.321486   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:50.321545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:50.351752   80228 cri.go:89] found id: ""
	I0814 17:40:50.351780   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.351791   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:50.351799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:50.351856   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:50.382022   80228 cri.go:89] found id: ""
	I0814 17:40:50.382050   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.382057   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:50.382063   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:50.382118   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:50.414057   80228 cri.go:89] found id: ""
	I0814 17:40:50.414083   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.414091   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:50.414098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:50.414156   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:50.447508   80228 cri.go:89] found id: ""
	I0814 17:40:50.447530   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.447537   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:50.447543   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:50.447606   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:50.487401   80228 cri.go:89] found id: ""
	I0814 17:40:50.487425   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.487434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:50.487442   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:50.487455   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:50.524404   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:50.524439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:50.578220   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:50.578256   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:50.591405   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:50.591431   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:50.657727   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:50.657750   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:50.657762   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
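Because no control-plane containers exist, each retry falls back to gathering diagnostics: the last 400 lines of the kubelet and CRI-O journals, warning-and-above kernel messages, and a crictl (or docker) container listing. The command lines below are copied verbatim from the "Run:" lines above and wrapped in a small Go driver; this is a sketch of the collection step, not minikube's actual logs.go implementation, and it assumes local root access via sudo.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		// Run each command through bash, exactly as the log does.
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("## %s\n%s", c, out)
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}
}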
	I0814 17:40:47.514490   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.012588   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.445363   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.944903   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.760728   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.760918   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:53.237985   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:53.250502   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:53.250572   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:53.285728   80228 cri.go:89] found id: ""
	I0814 17:40:53.285763   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.285774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:53.285784   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:53.285848   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:53.318195   80228 cri.go:89] found id: ""
	I0814 17:40:53.318231   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.318243   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:53.318252   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:53.318317   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:53.350259   80228 cri.go:89] found id: ""
	I0814 17:40:53.350291   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.350302   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:53.350310   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:53.350385   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:53.385894   80228 cri.go:89] found id: ""
	I0814 17:40:53.385920   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.385928   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:53.385934   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:53.385983   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:53.420851   80228 cri.go:89] found id: ""
	I0814 17:40:53.420878   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.420890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:53.420897   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:53.420963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:53.458332   80228 cri.go:89] found id: ""
	I0814 17:40:53.458370   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.458381   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:53.458392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:53.458465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:53.489719   80228 cri.go:89] found id: ""
	I0814 17:40:53.489750   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.489759   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:53.489765   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:53.489820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:53.522942   80228 cri.go:89] found id: ""
	I0814 17:40:53.522977   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.522988   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:53.522998   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:53.523013   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:53.599450   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:53.599492   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:53.637225   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:53.637254   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:53.688605   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:53.688647   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:53.704601   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:53.704633   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:53.775046   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
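Every "describe nodes" attempt fails the same way: the kubeconfig inside the VM points kubectl at localhost:8443, and the connection is refused because no kube-apiserver is listening there. A two-step probe for that symptom, using the address taken from the error message above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If nothing is listening on 8443, this fails with "connection refused",
	// matching the kubectl error repeated throughout this log.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 127.0.0.1:8443")
}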
	I0814 17:40:56.275201   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:56.288406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:56.288463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:52.013747   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.513735   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.514335   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:55.445462   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.447142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.946025   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.261047   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.760136   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.322862   80228 cri.go:89] found id: ""
	I0814 17:40:56.322891   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.322899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:56.322905   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:56.322954   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:56.356214   80228 cri.go:89] found id: ""
	I0814 17:40:56.356243   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.356262   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:56.356268   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:56.356338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:56.388877   80228 cri.go:89] found id: ""
	I0814 17:40:56.388900   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.388909   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:56.388915   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:56.388967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:56.422552   80228 cri.go:89] found id: ""
	I0814 17:40:56.422577   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.422585   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:56.422590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:56.422649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:56.456995   80228 cri.go:89] found id: ""
	I0814 17:40:56.457018   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.457026   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:56.457031   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:56.457079   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:56.495745   80228 cri.go:89] found id: ""
	I0814 17:40:56.495772   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.495788   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:56.495797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:56.495868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:56.529139   80228 cri.go:89] found id: ""
	I0814 17:40:56.529171   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.529179   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:56.529185   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:56.529237   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:56.561377   80228 cri.go:89] found id: ""
	I0814 17:40:56.561406   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.561414   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:56.561424   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:56.561439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:56.601504   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:56.601537   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:56.653369   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:56.653403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:56.666117   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:56.666144   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:56.731921   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.731949   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:56.731963   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.315712   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:59.328425   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:59.328486   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:59.364056   80228 cri.go:89] found id: ""
	I0814 17:40:59.364080   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.364088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:59.364094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:59.364151   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:59.398948   80228 cri.go:89] found id: ""
	I0814 17:40:59.398971   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.398978   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:59.398984   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:59.399029   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:59.430301   80228 cri.go:89] found id: ""
	I0814 17:40:59.430327   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.430335   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:59.430341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:59.430406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:59.465278   80228 cri.go:89] found id: ""
	I0814 17:40:59.465301   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.465309   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:59.465315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:59.465372   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:59.497544   80228 cri.go:89] found id: ""
	I0814 17:40:59.497575   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.497586   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:59.497595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:59.497659   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:59.529463   80228 cri.go:89] found id: ""
	I0814 17:40:59.529494   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.529506   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:59.529513   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:59.529587   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:59.562448   80228 cri.go:89] found id: ""
	I0814 17:40:59.562477   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.562487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:59.562495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:59.562609   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:59.594059   80228 cri.go:89] found id: ""
	I0814 17:40:59.594089   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.594103   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:59.594112   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:59.594123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.672139   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:59.672172   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:59.710714   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:59.710743   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:59.762645   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:59.762676   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:59.776006   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:59.776033   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:59.838187   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
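Each retry round begins with "sudo pgrep -xnf kube-apiserver.*minikube.*", a process-level check that a minikube-launched kube-apiserver exists at all before any crictl queries are made. A minimal sketch of that check; the pgrep pattern is copied from the log, and the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits non-zero when no process matches, which shows up here as err.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Printf("kube-apiserver running with PID(s): %s", out)
}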
	I0814 17:40:59.013030   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:01.013280   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.445595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.944484   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.260244   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.760862   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.338964   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:02.351381   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:02.351460   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:02.383206   80228 cri.go:89] found id: ""
	I0814 17:41:02.383235   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.383244   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:02.383250   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:02.383310   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:02.417016   80228 cri.go:89] found id: ""
	I0814 17:41:02.417042   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.417049   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:02.417055   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:02.417111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:02.451936   80228 cri.go:89] found id: ""
	I0814 17:41:02.451964   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.451974   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:02.451982   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:02.452042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:02.489896   80228 cri.go:89] found id: ""
	I0814 17:41:02.489927   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.489937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:02.489945   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:02.490011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:02.524273   80228 cri.go:89] found id: ""
	I0814 17:41:02.524308   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.524339   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:02.524346   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:02.524409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:02.558813   80228 cri.go:89] found id: ""
	I0814 17:41:02.558842   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.558850   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:02.558861   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:02.558917   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:02.592704   80228 cri.go:89] found id: ""
	I0814 17:41:02.592733   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.592747   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:02.592753   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:02.592818   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:02.625250   80228 cri.go:89] found id: ""
	I0814 17:41:02.625277   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.625288   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:02.625299   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:02.625312   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:02.677577   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:02.677613   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:02.691407   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:02.691439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:02.756797   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:02.756869   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:02.756888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:02.830803   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:02.830842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.370085   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:05.385272   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:05.385342   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:05.421775   80228 cri.go:89] found id: ""
	I0814 17:41:05.421799   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.421806   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:05.421812   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:05.421860   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:05.457054   80228 cri.go:89] found id: ""
	I0814 17:41:05.457083   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.457093   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:05.457100   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:05.457153   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:05.489290   80228 cri.go:89] found id: ""
	I0814 17:41:05.489330   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.489338   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:05.489345   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:05.489392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:05.527066   80228 cri.go:89] found id: ""
	I0814 17:41:05.527091   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.527098   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:05.527105   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:05.527155   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:05.563882   80228 cri.go:89] found id: ""
	I0814 17:41:05.563915   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.563925   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:05.563931   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:05.563982   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:05.601837   80228 cri.go:89] found id: ""
	I0814 17:41:05.601863   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.601871   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:05.601879   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:05.601940   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:05.633503   80228 cri.go:89] found id: ""
	I0814 17:41:05.633531   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.633539   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:05.633545   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:05.633615   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:05.668281   80228 cri.go:89] found id: ""
	I0814 17:41:05.668312   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.668324   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:05.668335   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:05.668349   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:05.747214   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:05.747249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.784408   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:05.784441   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:05.835067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:05.835103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:05.847938   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:05.847966   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:05.917404   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:03.513033   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:05.514476   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:06.944595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.944850   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:07.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:09.762513   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.417559   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:08.431092   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:08.431165   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:08.465357   80228 cri.go:89] found id: ""
	I0814 17:41:08.465515   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.465543   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:08.465560   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:08.465675   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:08.499085   80228 cri.go:89] found id: ""
	I0814 17:41:08.499114   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.499123   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:08.499129   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:08.499180   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:08.533881   80228 cri.go:89] found id: ""
	I0814 17:41:08.533909   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.533917   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:08.533922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:08.533972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:08.570503   80228 cri.go:89] found id: ""
	I0814 17:41:08.570549   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.570560   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:08.570572   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:08.570649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:08.602557   80228 cri.go:89] found id: ""
	I0814 17:41:08.602599   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.602610   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:08.602691   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:08.602785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:08.636174   80228 cri.go:89] found id: ""
	I0814 17:41:08.636199   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.636206   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:08.636213   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:08.636261   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:08.672774   80228 cri.go:89] found id: ""
	I0814 17:41:08.672804   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.672815   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:08.672823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:08.672890   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:08.705535   80228 cri.go:89] found id: ""
	I0814 17:41:08.705590   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.705605   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:08.705622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:08.705641   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:08.744315   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:08.744341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:08.794632   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:08.794666   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:08.808089   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:08.808117   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:08.876417   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:08.876436   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:08.876452   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:08.013688   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:10.512639   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.444206   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:13.944056   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:12.260065   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:14.759640   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.458562   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:11.470905   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:11.470965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:11.505992   80228 cri.go:89] found id: ""
	I0814 17:41:11.506023   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.506036   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:11.506044   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:11.506112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:11.540893   80228 cri.go:89] found id: ""
	I0814 17:41:11.540922   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.540932   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:11.540945   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:11.541001   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:11.575423   80228 cri.go:89] found id: ""
	I0814 17:41:11.575448   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.575455   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:11.575462   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:11.575520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:11.608126   80228 cri.go:89] found id: ""
	I0814 17:41:11.608155   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.608164   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:11.608171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:11.608222   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:11.640165   80228 cri.go:89] found id: ""
	I0814 17:41:11.640190   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.640198   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:11.640204   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:11.640263   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:11.674425   80228 cri.go:89] found id: ""
	I0814 17:41:11.674446   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.674455   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:11.674460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:11.674513   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:11.707448   80228 cri.go:89] found id: ""
	I0814 17:41:11.707477   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.707487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:11.707493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:11.707555   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:11.744309   80228 cri.go:89] found id: ""
	I0814 17:41:11.744338   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.744346   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:11.744363   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:11.744375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:11.824165   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:11.824196   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:11.862013   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:11.862039   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:11.913862   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:11.913902   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:11.927147   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:11.927178   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:11.998403   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.498590   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:14.512847   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:14.512938   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:14.549255   80228 cri.go:89] found id: ""
	I0814 17:41:14.549288   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.549306   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:14.549316   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:14.549382   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:14.588917   80228 cri.go:89] found id: ""
	I0814 17:41:14.588948   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.588956   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:14.588963   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:14.589012   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:14.622581   80228 cri.go:89] found id: ""
	I0814 17:41:14.622611   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.622621   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:14.622628   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:14.622693   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:14.656029   80228 cri.go:89] found id: ""
	I0814 17:41:14.656056   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.656064   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:14.656070   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:14.656117   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:14.687502   80228 cri.go:89] found id: ""
	I0814 17:41:14.687527   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.687536   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:14.687541   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:14.687614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:14.720682   80228 cri.go:89] found id: ""
	I0814 17:41:14.720713   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.720721   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:14.720728   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:14.720778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:14.752482   80228 cri.go:89] found id: ""
	I0814 17:41:14.752511   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.752520   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:14.752525   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:14.752577   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:14.792980   80228 cri.go:89] found id: ""
	I0814 17:41:14.793004   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.793014   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:14.793026   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:14.793042   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:14.845259   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:14.845297   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:14.858530   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:14.858556   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:14.931025   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.931054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:14.931067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:15.008081   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:15.008115   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:13.014174   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:15.512768   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444802   79521 pod_ready.go:81] duration metric: took 4m0.006448573s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:16.444810   79521 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 17:41:16.444817   79521 pod_ready.go:38] duration metric: took 4m5.044051569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:16.444832   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:41:16.444858   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:16.444901   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:16.499710   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.499742   79521 cri.go:89] found id: ""
	I0814 17:41:16.499751   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:16.499815   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.504467   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:16.504544   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:16.546815   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.546842   79521 cri.go:89] found id: ""
	I0814 17:41:16.546851   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:16.546905   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.550917   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:16.550986   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:16.590195   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:16.590216   79521 cri.go:89] found id: ""
	I0814 17:41:16.590224   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:16.590267   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.594123   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:16.594196   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:16.631058   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:16.631091   79521 cri.go:89] found id: ""
	I0814 17:41:16.631101   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:16.631163   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.635151   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:16.635226   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:16.671555   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.671582   79521 cri.go:89] found id: ""
	I0814 17:41:16.671592   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:16.671657   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.675790   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:16.675847   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:16.713131   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:16.713157   79521 cri.go:89] found id: ""
	I0814 17:41:16.713165   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:16.713217   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.717296   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:16.717354   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:16.756212   79521 cri.go:89] found id: ""
	I0814 17:41:16.756245   79521 logs.go:276] 0 containers: []
	W0814 17:41:16.756255   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:16.756261   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:16.756324   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:16.802379   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.802411   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.802417   79521 cri.go:89] found id: ""
	I0814 17:41:16.802431   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:16.802492   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.807105   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.811210   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:16.811241   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.852490   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:16.852526   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.894384   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:16.894425   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.929919   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:16.929949   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.965031   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:16.965061   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:17.468878   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.468945   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.482799   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.482826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:17.610874   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:17.610904   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:17.649292   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:17.649322   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:17.691014   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:17.691045   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:17.749218   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.749254   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.794240   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.794280   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.868805   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:17.868851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:18.760369   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:17.544873   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:17.557699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:17.557791   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:17.600314   80228 cri.go:89] found id: ""
	I0814 17:41:17.600347   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.600360   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:17.600370   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:17.600441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:17.634873   80228 cri.go:89] found id: ""
	I0814 17:41:17.634902   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.634914   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:17.634923   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:17.634986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:17.670521   80228 cri.go:89] found id: ""
	I0814 17:41:17.670552   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.670563   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:17.670571   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:17.670647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:17.705587   80228 cri.go:89] found id: ""
	I0814 17:41:17.705612   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.705626   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:17.705632   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:17.705682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:17.768178   80228 cri.go:89] found id: ""
	I0814 17:41:17.768207   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.768218   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:17.768226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:17.768290   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:17.804692   80228 cri.go:89] found id: ""
	I0814 17:41:17.804721   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.804729   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:17.804735   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:17.804795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:17.847994   80228 cri.go:89] found id: ""
	I0814 17:41:17.848030   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.848041   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:17.848052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:17.848122   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:17.883905   80228 cri.go:89] found id: ""
	I0814 17:41:17.883935   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.883944   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:17.883953   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.883965   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.931481   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.931522   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.983315   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.983363   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.996941   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.996981   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:18.067254   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:18.067279   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:18.067295   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:20.642099   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.655941   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.656014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.692525   80228 cri.go:89] found id: ""
	I0814 17:41:20.692554   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.692565   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:20.692577   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.692634   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.727721   80228 cri.go:89] found id: ""
	I0814 17:41:20.727755   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.727769   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:20.727778   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.727845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.770441   80228 cri.go:89] found id: ""
	I0814 17:41:20.770471   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.770481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:20.770488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.770550   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.807932   80228 cri.go:89] found id: ""
	I0814 17:41:20.807961   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.807968   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:20.807975   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.808030   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.849919   80228 cri.go:89] found id: ""
	I0814 17:41:20.849944   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.849963   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:20.849970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.850045   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.887351   80228 cri.go:89] found id: ""
	I0814 17:41:20.887382   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.887393   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:20.887401   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.887465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.921284   80228 cri.go:89] found id: ""
	I0814 17:41:20.921310   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.921321   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.921328   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:20.921409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:20.955238   80228 cri.go:89] found id: ""
	I0814 17:41:20.955267   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.955278   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:20.955288   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.955314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:21.024544   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:21.024565   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.024579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:21.103987   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.104019   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.145515   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.145550   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.197307   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.197346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.514682   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.015152   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.429364   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.445075   79521 api_server.go:72] duration metric: took 4m16.759338748s to wait for apiserver process to appear ...
	I0814 17:41:20.445102   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:41:20.445133   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.445179   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.477630   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.477655   79521 cri.go:89] found id: ""
	I0814 17:41:20.477663   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:20.477714   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.481667   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.481728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.514443   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.514465   79521 cri.go:89] found id: ""
	I0814 17:41:20.514473   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:20.514516   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.518344   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.518401   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.559625   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:20.559647   79521 cri.go:89] found id: ""
	I0814 17:41:20.559653   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:20.559706   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.564137   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.564203   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.603504   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:20.603531   79521 cri.go:89] found id: ""
	I0814 17:41:20.603540   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:20.603602   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.608260   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.608334   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.641466   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:20.641487   79521 cri.go:89] found id: ""
	I0814 17:41:20.641494   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:20.641538   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.645566   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.645625   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.685003   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:20.685032   79521 cri.go:89] found id: ""
	I0814 17:41:20.685042   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:20.685104   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.690347   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.690429   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.733753   79521 cri.go:89] found id: ""
	I0814 17:41:20.733782   79521 logs.go:276] 0 containers: []
	W0814 17:41:20.733793   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.733800   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:20.733862   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:20.781659   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:20.781683   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:20.781689   79521 cri.go:89] found id: ""
	I0814 17:41:20.781697   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:20.781753   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.786293   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.790358   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.790377   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:20.916473   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:20.916513   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.968706   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:20.968743   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:21.003507   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:21.003546   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:21.049909   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:21.049961   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:21.090052   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:21.090080   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:21.129551   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.129585   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.174792   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.174828   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.247392   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.247440   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:21.261095   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:21.261129   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:21.306583   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:21.306616   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:21.339602   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:21.339642   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:21.397695   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.397732   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.301807   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:41:24.306392   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:41:24.307364   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:41:24.307390   79521 api_server.go:131] duration metric: took 3.862280551s to wait for apiserver health ...
	I0814 17:41:24.307398   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:41:24.307418   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:24.307463   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:24.342519   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:24.342552   79521 cri.go:89] found id: ""
	I0814 17:41:24.342561   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:24.342627   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.346361   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:24.346422   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:24.386973   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:24.387001   79521 cri.go:89] found id: ""
	I0814 17:41:24.387012   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:24.387066   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.390942   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:24.390999   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:24.426841   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:24.426863   79521 cri.go:89] found id: ""
	I0814 17:41:24.426872   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:24.426927   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.430856   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:24.430917   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:24.467024   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.467050   79521 cri.go:89] found id: ""
	I0814 17:41:24.467059   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:24.467117   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.471659   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:24.471728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:24.506759   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:24.506786   79521 cri.go:89] found id: ""
	I0814 17:41:24.506799   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:24.506857   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.511660   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:24.511728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:24.547768   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:24.547795   79521 cri.go:89] found id: ""
	I0814 17:41:24.547805   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:24.547862   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.552881   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:24.552941   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:24.588519   79521 cri.go:89] found id: ""
	I0814 17:41:24.588544   79521 logs.go:276] 0 containers: []
	W0814 17:41:24.588551   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:24.588557   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:24.588602   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:24.624604   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.624626   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:24.624630   79521 cri.go:89] found id: ""
	I0814 17:41:24.624636   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:24.624691   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.628703   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.632611   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:24.632636   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.671903   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:24.671935   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.709821   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.709851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:25.107477   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:25.107515   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:25.221012   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:25.221041   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.760924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.259780   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.260347   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.712584   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:23.726467   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:23.726545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:23.762871   80228 cri.go:89] found id: ""
	I0814 17:41:23.762906   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.762916   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:23.762922   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:23.762972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:23.800068   80228 cri.go:89] found id: ""
	I0814 17:41:23.800096   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.800105   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:23.800113   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:23.800173   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:23.834913   80228 cri.go:89] found id: ""
	I0814 17:41:23.834945   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.834956   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:23.834963   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:23.835022   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:23.871196   80228 cri.go:89] found id: ""
	I0814 17:41:23.871222   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.871233   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:23.871240   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:23.871294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:23.907830   80228 cri.go:89] found id: ""
	I0814 17:41:23.907854   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.907862   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:23.907868   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:23.907926   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:23.941110   80228 cri.go:89] found id: ""
	I0814 17:41:23.941133   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.941141   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:23.941146   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:23.941197   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:23.973602   80228 cri.go:89] found id: ""
	I0814 17:41:23.973631   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.973649   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:23.973655   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:23.973710   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:24.007398   80228 cri.go:89] found id: ""
	I0814 17:41:24.007436   80228 logs.go:276] 0 containers: []
	W0814 17:41:24.007450   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:24.007462   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:24.007478   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:24.061830   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:24.061867   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:24.075012   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:24.075046   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:24.148666   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:24.148692   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.148703   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.230208   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:24.230248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:22.513616   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.013383   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.272397   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:25.272429   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:25.317574   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:25.317603   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:25.352239   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:25.352271   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:25.409997   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:25.410030   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:25.443875   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:25.443899   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:25.490987   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:25.491023   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:25.563495   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:25.563531   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:25.577305   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:25.577345   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:28.147823   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:41:28.147855   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.147860   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.147864   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.147867   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.147870   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.147874   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.147879   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.147883   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.147890   79521 system_pods.go:74] duration metric: took 3.840486938s to wait for pod list to return data ...
	I0814 17:41:28.147898   79521 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:41:28.150377   79521 default_sa.go:45] found service account: "default"
	I0814 17:41:28.150398   79521 default_sa.go:55] duration metric: took 2.493777ms for default service account to be created ...
	I0814 17:41:28.150406   79521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:41:28.154470   79521 system_pods.go:86] 8 kube-system pods found
	I0814 17:41:28.154494   79521 system_pods.go:89] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.154500   79521 system_pods.go:89] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.154504   79521 system_pods.go:89] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.154510   79521 system_pods.go:89] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.154514   79521 system_pods.go:89] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.154519   79521 system_pods.go:89] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.154525   79521 system_pods.go:89] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.154530   79521 system_pods.go:89] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.154537   79521 system_pods.go:126] duration metric: took 4.125964ms to wait for k8s-apps to be running ...
	I0814 17:41:28.154544   79521 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:41:28.154585   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:28.170494   79521 system_svc.go:56] duration metric: took 15.940728ms WaitForService to wait for kubelet
	I0814 17:41:28.170524   79521 kubeadm.go:582] duration metric: took 4m24.484791018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:41:28.170545   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:41:28.173368   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:41:28.173395   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:41:28.173407   79521 node_conditions.go:105] duration metric: took 2.858344ms to run NodePressure ...
	I0814 17:41:28.173417   79521 start.go:241] waiting for startup goroutines ...
	I0814 17:41:28.173424   79521 start.go:246] waiting for cluster config update ...
	I0814 17:41:28.173435   79521 start.go:255] writing updated cluster config ...
	I0814 17:41:28.173730   79521 ssh_runner.go:195] Run: rm -f paused
	I0814 17:41:28.219460   79521 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:41:28.221461   79521 out.go:177] * Done! kubectl is now configured to use "embed-certs-309673" cluster and "default" namespace by default
	I0814 17:41:27.761580   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:30.260454   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:26.776204   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:26.789057   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:26.789132   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:26.822531   80228 cri.go:89] found id: ""
	I0814 17:41:26.822564   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.822575   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:26.822590   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:26.822651   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:26.855314   80228 cri.go:89] found id: ""
	I0814 17:41:26.855353   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.855365   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:26.855372   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:26.855434   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:26.889389   80228 cri.go:89] found id: ""
	I0814 17:41:26.889413   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.889421   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:26.889427   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:26.889485   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:26.925478   80228 cri.go:89] found id: ""
	I0814 17:41:26.925500   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.925508   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:26.925514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:26.925560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:26.957012   80228 cri.go:89] found id: ""
	I0814 17:41:26.957042   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.957053   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:26.957061   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:26.957114   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:26.989358   80228 cri.go:89] found id: ""
	I0814 17:41:26.989388   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.989399   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:26.989406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:26.989468   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:27.024761   80228 cri.go:89] found id: ""
	I0814 17:41:27.024786   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.024805   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:27.024830   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:27.024895   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:27.059172   80228 cri.go:89] found id: ""
	I0814 17:41:27.059204   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.059215   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:27.059226   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:27.059240   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.096123   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:27.096151   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:27.147689   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:27.147719   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:27.161454   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:27.161483   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:27.234644   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:27.234668   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:27.234680   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:29.817428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:29.831731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:29.831811   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:29.868531   80228 cri.go:89] found id: ""
	I0814 17:41:29.868567   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.868577   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:29.868585   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:29.868657   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:29.913578   80228 cri.go:89] found id: ""
	I0814 17:41:29.913602   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.913611   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:29.913617   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:29.913677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:29.963916   80228 cri.go:89] found id: ""
	I0814 17:41:29.963939   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.963946   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:29.963952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:29.964011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:30.016735   80228 cri.go:89] found id: ""
	I0814 17:41:30.016763   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.016773   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:30.016781   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:30.016841   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:30.048852   80228 cri.go:89] found id: ""
	I0814 17:41:30.048880   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.048890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:30.048898   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:30.048960   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:30.080291   80228 cri.go:89] found id: ""
	I0814 17:41:30.080324   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.080335   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:30.080343   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:30.080506   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:30.113876   80228 cri.go:89] found id: ""
	I0814 17:41:30.113904   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.113914   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:30.113921   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:30.113984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:30.147568   80228 cri.go:89] found id: ""
	I0814 17:41:30.147594   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.147604   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:30.147614   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:30.147627   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:30.197596   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:30.197630   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:30.210576   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:30.210602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:30.277711   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:30.277731   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:30.277746   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:30.356556   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:30.356590   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
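For reference, the probe/gather cycle above is minikube repeatedly checking a control plane that never came up: every crictl listing returns zero containers and the apiserver at localhost:8443 refuses connections, so only kubelet, CRI-O and dmesg logs can be collected. The same checks can be rerun by hand inside the node; a minimal sketch using the exact commands from the log:

    # any control-plane containers known to CRI-O, in any state?
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl ps -a --quiet --name=etcd
    # the fallback log sources minikube gathers when the listings come back empty
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400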
	I0814 17:41:27.013699   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:29.014020   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:31.512974   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:32.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:35.254066   79871 pod_ready.go:81] duration metric: took 4m0.000392709s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:35.254095   79871 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:41:35.254112   79871 pod_ready.go:38] duration metric: took 4m12.044429915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:35.254137   79871 kubeadm.go:597] duration metric: took 4m20.041916203s to restartPrimaryControlPlane
	W0814 17:41:35.254189   79871 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:35.254218   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
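The 4m0s Ready wait on metrics-server-6867b74b74-qtzm8 expiring is what pushes minikube down the "reset cluster" path here; in these StartStop runs the metrics-server addon is pointed at a fake.domain image (see the addon setup further below), so the pod staying un-Ready is expected. Inspecting such a pod by hand would use standard kubectl, e.g. (illustrative commands, not run by the test; the selector assumes the addon's usual k8s-app=metrics-server label):

    kubectl --context default-k8s-diff-port-885666 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-885666 -n kube-system describe pod metrics-server-6867b74b74-qtzm8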
	I0814 17:41:32.892697   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:32.909435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:32.909497   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:32.945055   80228 cri.go:89] found id: ""
	I0814 17:41:32.945080   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.945088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:32.945094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:32.945150   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:32.979266   80228 cri.go:89] found id: ""
	I0814 17:41:32.979294   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.979305   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:32.979312   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:32.979398   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:33.014260   80228 cri.go:89] found id: ""
	I0814 17:41:33.014286   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.014294   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:33.014299   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:33.014351   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:33.047590   80228 cri.go:89] found id: ""
	I0814 17:41:33.047622   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.047633   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:33.047646   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:33.047711   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:33.081258   80228 cri.go:89] found id: ""
	I0814 17:41:33.081294   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.081328   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:33.081337   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:33.081403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:33.112209   80228 cri.go:89] found id: ""
	I0814 17:41:33.112237   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.112247   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:33.112254   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:33.112318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:33.143854   80228 cri.go:89] found id: ""
	I0814 17:41:33.143892   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.143904   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:33.143913   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:33.143977   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:33.175147   80228 cri.go:89] found id: ""
	I0814 17:41:33.175190   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.175201   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:33.175212   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:33.175226   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:33.212877   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:33.212908   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:33.268067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:33.268103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:33.281357   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:33.281386   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:33.350233   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:33.350257   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:33.350269   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:35.929498   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:35.942290   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:35.942354   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:35.975782   80228 cri.go:89] found id: ""
	I0814 17:41:35.975809   80228 logs.go:276] 0 containers: []
	W0814 17:41:35.975818   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:35.975826   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:35.975886   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:36.008165   80228 cri.go:89] found id: ""
	I0814 17:41:36.008191   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.008200   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:36.008206   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:36.008262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:36.044912   80228 cri.go:89] found id: ""
	I0814 17:41:36.044937   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.044945   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:36.044954   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:36.045002   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:36.078068   80228 cri.go:89] found id: ""
	I0814 17:41:36.078096   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.078108   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:36.078116   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:36.078179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:36.110429   80228 cri.go:89] found id: ""
	I0814 17:41:36.110456   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.110467   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:36.110480   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:36.110540   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:36.142086   80228 cri.go:89] found id: ""
	I0814 17:41:36.142111   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.142119   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:36.142125   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:36.142186   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:36.172738   80228 cri.go:89] found id: ""
	I0814 17:41:36.172761   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.172769   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:36.172775   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:36.172831   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:36.204345   80228 cri.go:89] found id: ""
	I0814 17:41:36.204368   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.204376   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:36.204388   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:36.204403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:36.216667   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:36.216689   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:36.279509   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:36.279528   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:36.279540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:33.513591   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.013400   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.360411   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:36.360447   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:36.398193   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:36.398230   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:38.952415   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:38.968484   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:38.968554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:39.002450   80228 cri.go:89] found id: ""
	I0814 17:41:39.002479   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.002486   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:39.002493   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:39.002551   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:39.035840   80228 cri.go:89] found id: ""
	I0814 17:41:39.035868   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.035876   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:39.035882   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:39.035934   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:39.069900   80228 cri.go:89] found id: ""
	I0814 17:41:39.069929   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.069940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:39.069946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:39.069999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:39.104657   80228 cri.go:89] found id: ""
	I0814 17:41:39.104681   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.104689   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:39.104695   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:39.104751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:39.137279   80228 cri.go:89] found id: ""
	I0814 17:41:39.137312   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.137322   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:39.137330   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:39.137403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:39.170377   80228 cri.go:89] found id: ""
	I0814 17:41:39.170414   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.170424   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:39.170430   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:39.170491   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:39.205742   80228 cri.go:89] found id: ""
	I0814 17:41:39.205779   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.205790   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:39.205796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:39.205850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:39.239954   80228 cri.go:89] found id: ""
	I0814 17:41:39.239979   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.239987   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:39.239994   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:39.240011   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:39.276587   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:39.276619   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:39.329286   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:39.329322   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:39.342232   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:39.342257   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:39.411043   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:39.411063   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:39.411075   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:38.013562   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:40.013740   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:41.994479   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:42.007736   80228 kubeadm.go:597] duration metric: took 4m4.488869114s to restartPrimaryControlPlane
	W0814 17:41:42.007822   80228 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:42.007871   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:42.513259   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:45.013455   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:46.541593   80228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.533697889s)
	I0814 17:41:46.541676   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:46.556181   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:41:46.565943   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:41:46.575481   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:41:46.575501   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:41:46.575549   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:41:46.585143   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:41:46.585202   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:41:46.595157   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:41:46.604539   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:41:46.604600   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:41:46.613345   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.622186   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:41:46.622242   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.631221   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:41:46.640649   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:41:46.640706   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
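Because `kubeadm reset` already wiped /etc/kubernetes/*.conf, every one of the endpoint greps above exits with status 2 and the follow-up rm -f is a no-op; minikube then proceeds straight to `kubeadm init`. The per-file check/remove sequence is equivalent to this compact loop (a paraphrase of the commands above, not what minikube literally executes):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done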
	I0814 17:41:46.650161   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:41:46.724104   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:41:46.724182   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:41:46.860463   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:41:46.860606   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:41:46.860725   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:41:47.036697   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:41:47.038444   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:41:47.038561   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:41:47.038670   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:41:47.038775   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:41:47.038860   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:41:47.038973   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:41:47.039067   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:41:47.039172   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:41:47.039256   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:41:47.039359   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:41:47.039456   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:41:47.039516   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:41:47.039587   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:41:47.278696   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:41:47.664300   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:41:47.988137   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:41:48.076560   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:41:48.093447   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:41:48.094656   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:41:48.094793   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:41:48.253225   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:41:48.255034   80228 out.go:204]   - Booting up control plane ...
	I0814 17:41:48.255160   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:41:48.259041   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:41:48.260074   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:41:48.260862   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:41:48.262910   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:41:47.513415   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:50.012937   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:52.013499   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:54.514150   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:57.013146   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:59.013393   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.014185   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.441261   79871 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.187019598s)
	I0814 17:42:01.441333   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:01.457213   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:01.466802   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:01.475719   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:01.475736   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:01.475784   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:42:01.484555   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:01.484618   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:01.493956   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:42:01.503873   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:01.503923   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:01.514710   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.524473   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:01.524531   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.534749   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:42:01.544491   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:01.544558   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:01.555481   79871 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:01.599801   79871 kubeadm.go:310] W0814 17:42:01.575622    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.600615   79871 kubeadm.go:310] W0814 17:42:01.576625    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.703064   79871 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
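The two deprecation warnings mean the generated /var/tmp/minikube/kubeadm.yaml still declares the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 accepts but no longer prefers; the kubelet-service warning is generally benign here because minikube starts the kubelet itself. One could confirm and migrate the config with the command the warning quotes (paths are from the log; the output file name is illustrative):

    sudo grep -n 'apiVersion: kubeadm.k8s.io' /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml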
	I0814 17:42:03.513007   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:05.514241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:09.627141   79871 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:09.627216   79871 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:09.627344   79871 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:09.627480   79871 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:09.627638   79871 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:09.627717   79871 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:09.629272   79871 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:09.629370   79871 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:09.629472   79871 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:09.629592   79871 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:09.629712   79871 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:09.629780   79871 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:09.629826   79871 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:09.629898   79871 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:09.629963   79871 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:09.630076   79871 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:09.630198   79871 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:09.630253   79871 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:09.630314   79871 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:09.630357   79871 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:09.630412   79871 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:09.630457   79871 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:09.630509   79871 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:09.630560   79871 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:09.630629   79871 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:09.630688   79871 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:09.632664   79871 out.go:204]   - Booting up control plane ...
	I0814 17:42:09.632763   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:09.632878   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:09.632963   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:09.633100   79871 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:09.633207   79871 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:09.633252   79871 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:09.633412   79871 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:09.633542   79871 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:09.633624   79871 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004125702s
	I0814 17:42:09.633727   79871 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:09.633814   79871 kubeadm.go:310] [api-check] The API server is healthy after 4.501648596s
	I0814 17:42:09.633967   79871 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:09.634119   79871 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:09.634169   79871 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:09.634328   79871 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-885666 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:09.634400   79871 kubeadm.go:310] [bootstrap-token] Using token: 17ct2j.hazurgskaspe26qx
	I0814 17:42:09.635732   79871 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:09.635859   79871 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:09.635990   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:09.636141   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:09.636250   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:09.636347   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:09.636485   79871 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:09.636657   79871 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:09.636708   79871 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:09.636747   79871 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:09.636753   79871 kubeadm.go:310] 
	I0814 17:42:09.636813   79871 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:09.636835   79871 kubeadm.go:310] 
	I0814 17:42:09.636972   79871 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:09.636995   79871 kubeadm.go:310] 
	I0814 17:42:09.637029   79871 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:09.637120   79871 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:09.637185   79871 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:09.637195   79871 kubeadm.go:310] 
	I0814 17:42:09.637267   79871 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:09.637277   79871 kubeadm.go:310] 
	I0814 17:42:09.637315   79871 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:09.637321   79871 kubeadm.go:310] 
	I0814 17:42:09.637384   79871 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:09.637461   79871 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:09.637523   79871 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:09.637529   79871 kubeadm.go:310] 
	I0814 17:42:09.637623   79871 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:09.637691   79871 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:09.637703   79871 kubeadm.go:310] 
	I0814 17:42:09.637779   79871 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.637866   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:09.637890   79871 kubeadm.go:310] 	--control-plane 
	I0814 17:42:09.637899   79871 kubeadm.go:310] 
	I0814 17:42:09.638010   79871 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:09.638020   79871 kubeadm.go:310] 
	I0814 17:42:09.638098   79871 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.638211   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:09.638234   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:42:09.638246   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:09.639748   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:09.641031   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:09.652173   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
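The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are not reproduced in the log; for orientation, a bridge conflist of roughly that shape looks like the following (illustrative contents only, not the exact file minikube ships):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "addIf": "true", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF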
	I0814 17:42:09.670482   79871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-885666 minikube.k8s.io/updated_at=2024_08_14T17_42_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=default-k8s-diff-port-885666 minikube.k8s.io/primary=true
	I0814 17:42:09.703097   79871 ops.go:34] apiserver oom_adj: -16
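The label step just above stamps the node with the minikube.k8s.io/* metadata (version, commit, updated_at, primary=true), and the oom_adj line confirms the apiserver process is running with OOM score -16. A quick hand check of the labels would be (illustrative, not run by the test):

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get node default-k8s-diff-port-885666 --show-labels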
	I0814 17:42:09.881340   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:10.381470   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:07.516539   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.015456   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.882013   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.382239   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.881638   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.381703   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.881401   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.381524   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.881402   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.381696   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.498441   79871 kubeadm.go:1113] duration metric: took 4.827929439s to wait for elevateKubeSystemPrivileges
	I0814 17:42:14.498474   79871 kubeadm.go:394] duration metric: took 4m59.336328921s to StartCluster
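The burst of `kubectl get sa default` calls above is minikube waiting for the default ServiceAccount to appear after granting kube-system:default the cluster-admin role (the minikube-rbac clusterrolebinding created earlier); here the wait took ~4.8s. Run by hand it amounts to a simple poll (the loop wrapper is illustrative; the inner command is verbatim from the log):

    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done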
	I0814 17:42:14.498493   79871 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.498581   79871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:42:14.501029   79871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.501309   79871 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:42:14.501432   79871 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:42:14.501508   79871 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501541   79871 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501550   79871 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:42:14.501577   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501590   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:42:14.501619   79871 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501667   79871 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501677   79871 addons.go:243] addon metrics-server should already be in state true
	I0814 17:42:14.501716   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501593   79871 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501840   79871 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-885666"
	I0814 17:42:14.502106   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502142   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502160   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502174   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502176   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502199   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502371   79871 out.go:177] * Verifying Kubernetes components...
	I0814 17:42:14.504085   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:42:14.519401   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0814 17:42:14.519631   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0814 17:42:14.520085   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520196   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520701   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520722   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.520789   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0814 17:42:14.520978   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520994   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.521255   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.521519   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521524   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521703   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.522021   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.522051   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.522548   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.522572   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.522864   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.523507   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.523550   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.525737   79871 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.525759   79871 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:42:14.525789   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.526144   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.526170   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.538930   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0814 17:42:14.538995   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0814 17:42:14.539567   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.539594   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.540125   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540138   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540266   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540289   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540624   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540770   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540825   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.540970   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.542540   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.542848   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.544439   79871 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:42:14.544444   79871 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:42:14.544881   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0814 17:42:14.545315   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.545575   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:42:14.545586   79871 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:42:14.545601   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545672   79871 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.545691   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:42:14.545708   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545750   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.545759   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.546339   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.547167   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.547290   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.549794   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.549829   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550423   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550637   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550707   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551025   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551119   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551168   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551302   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.551658   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.567203   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I0814 17:42:14.567613   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.568141   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.568167   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.568484   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.568678   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.570337   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.570867   79871 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.570888   79871 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:42:14.570906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.574091   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574562   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.574586   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574667   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.574857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.575039   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.575197   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.675594   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:42:14.694520   79871 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702650   79871 node_ready.go:49] node "default-k8s-diff-port-885666" has status "Ready":"True"
	I0814 17:42:14.702672   79871 node_ready.go:38] duration metric: took 8.119351ms for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702684   79871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:14.707535   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:14.762686   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.805275   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.837118   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:42:14.837143   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:42:14.881848   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:42:14.881872   79871 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:42:14.902731   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.902762   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903058   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903076   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.903092   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.903111   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903448   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.903484   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903493   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.908980   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.908995   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.909239   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.909310   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.909336   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.920224   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:14.920249   79871 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:42:14.955256   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:15.297167   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297190   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297544   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.297602   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297631   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.297649   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297659   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297865   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297885   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842348   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842376   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.842688   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.842707   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842716   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842738   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.842805   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.843057   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.843070   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.843081   79871 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-885666"
	I0814 17:42:15.844747   79871 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 17:42:12.513055   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:14.514298   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:15.845895   79871 addons.go:510] duration metric: took 1.344461878s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
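
The addon path logged above copies each manifest to /etc/kubernetes/addons over SSH and applies it with the node's bundled kubectl. A quick verification sketch from the host follows; it assumes the kubectl context name matches the profile name taken from the log (default-k8s-diff-port-885666) and that metrics-server registers its APIService under its usual name.

    # Verification sketch only (profile/context name taken from the log above).
    minikube -p default-k8s-diff-port-885666 addons list | grep -E 'metrics-server|storage-provisioner'
    kubectl --context default-k8s-diff-port-885666 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-885666 get apiservice v1beta1.metrics.k8s.io
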
	I0814 17:42:16.714014   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:18.715243   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:17.013231   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:19.013966   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:20.507978   79367 pod_ready.go:81] duration metric: took 4m0.001138158s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	E0814 17:42:20.508026   79367 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:42:20.508048   79367 pod_ready.go:38] duration metric: took 4m6.305785273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:20.508081   79367 kubeadm.go:597] duration metric: took 4m13.455842043s to restartPrimaryControlPlane
	W0814 17:42:20.508145   79367 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:42:20.508186   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:42:20.714660   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.714687   79871 pod_ready.go:81] duration metric: took 6.007129076s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.714696   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719517   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.719542   79871 pod_ready.go:81] duration metric: took 4.838754ms for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719554   79871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724787   79871 pod_ready.go:92] pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.724816   79871 pod_ready.go:81] duration metric: took 5.250194ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724834   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731431   79871 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.731456   79871 pod_ready.go:81] duration metric: took 1.00661383s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731468   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736442   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.736467   79871 pod_ready.go:81] duration metric: took 4.989787ms for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736480   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911495   79871 pod_ready.go:92] pod "kube-proxy-254cb" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.911520   79871 pod_ready.go:81] duration metric: took 175.03218ms for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911529   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311700   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:22.311730   79871 pod_ready.go:81] duration metric: took 400.194781ms for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311739   79871 pod_ready.go:38] duration metric: took 7.609043377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:22.311752   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:42:22.311800   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:42:22.326995   79871 api_server.go:72] duration metric: took 7.825649112s to wait for apiserver process to appear ...
	I0814 17:42:22.327018   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:42:22.327036   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:42:22.331069   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:42:22.332077   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:42:22.332096   79871 api_server.go:131] duration metric: took 5.0724ms to wait for apiserver health ...
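
The healthz probe logged above targets the apiserver on this profile's non-default port 8444. It can be reproduced directly; the sketch below assumes anonymous access to /healthz is still permitted by the default system:public-info-viewer binding and skips TLS verification with -k.

    # Reproduce the health probe from the log (IP and port taken from the lines above).
    curl -k https://192.168.50.184:8444/healthz    # expected body: ok
    curl -k https://192.168.50.184:8444/version    # reports v1.31.0 per the log
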
	I0814 17:42:22.332103   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:42:22.514565   79871 system_pods.go:59] 9 kube-system pods found
	I0814 17:42:22.514595   79871 system_pods.go:61] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.514601   79871 system_pods.go:61] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.514606   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.514610   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.514614   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.514617   79871 system_pods.go:61] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.514620   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.514626   79871 system_pods.go:61] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.514631   79871 system_pods.go:61] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.514639   79871 system_pods.go:74] duration metric: took 182.531543ms to wait for pod list to return data ...
	I0814 17:42:22.514647   79871 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:42:22.713101   79871 default_sa.go:45] found service account: "default"
	I0814 17:42:22.713125   79871 default_sa.go:55] duration metric: took 198.471814ms for default service account to be created ...
	I0814 17:42:22.713136   79871 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:42:22.914576   79871 system_pods.go:86] 9 kube-system pods found
	I0814 17:42:22.914619   79871 system_pods.go:89] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.914628   79871 system_pods.go:89] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.914635   79871 system_pods.go:89] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.914643   79871 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.914650   79871 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.914657   79871 system_pods.go:89] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.914665   79871 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.914678   79871 system_pods.go:89] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.914689   79871 system_pods.go:89] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.914705   79871 system_pods.go:126] duration metric: took 201.563199ms to wait for k8s-apps to be running ...
	I0814 17:42:22.914716   79871 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:42:22.914768   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:22.928499   79871 system_svc.go:56] duration metric: took 13.774119ms WaitForService to wait for kubelet
	I0814 17:42:22.928525   79871 kubeadm.go:582] duration metric: took 8.427183796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:42:22.928543   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:42:23.112363   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:42:23.112398   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:42:23.112410   79871 node_conditions.go:105] duration metric: took 183.861382ms to run NodePressure ...
	I0814 17:42:23.112423   79871 start.go:241] waiting for startup goroutines ...
	I0814 17:42:23.112432   79871 start.go:246] waiting for cluster config update ...
	I0814 17:42:23.112446   79871 start.go:255] writing updated cluster config ...
	I0814 17:42:23.112792   79871 ssh_runner.go:195] Run: rm -f paused
	I0814 17:42:23.162700   79871 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:42:23.164689   79871 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-885666" cluster and "default" namespace by default
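
Once the "Done!" line is printed the kubeconfig points at this profile. A minimal read-only smoke test is sketched below; the context name is assumed to equal the profile name from the log.

    # Post-start smoke test sketch for the profile above.
    kubectl --context default-k8s-diff-port-885666 get nodes -o wide
    kubectl --context default-k8s-diff-port-885666 -n kube-system get pods
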
	I0814 17:42:28.263217   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:42:28.263629   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:28.263853   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:33.264169   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:33.264403   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:43.264648   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:43.264858   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:46.859569   79367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.351355314s)
	I0814 17:42:46.859653   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:46.875530   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:46.884772   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:46.894185   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:46.894208   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:46.894258   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:42:46.903690   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:46.903748   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:46.913277   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:42:46.922120   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:46.922173   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:46.931143   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.939936   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:46.939997   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.949257   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:42:46.958109   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:46.958169   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
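
The block above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here all four files are simply missing after the reset). A condensed sketch of that same check/cleanup pattern, with the endpoint string taken from the log:

    # Sketch of the stale-config cleanup performed above.
    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
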
	I0814 17:42:46.967609   79367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:47.010119   79367 kubeadm.go:310] W0814 17:42:46.983769    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.010889   79367 kubeadm.go:310] W0814 17:42:46.984558    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.122746   79367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
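
The two deprecation warnings above suggest migrating the generated v1beta3 config with "kubeadm config migrate". A sketch of that migration using minikube's bundled kubeadm is shown below; the input path comes from the log, while the output filename is only illustrative.

    # Migration sketch suggested by the kubeadm warning above.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml   # illustrative output path
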
	I0814 17:42:55.571963   79367 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:55.572017   79367 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:55.572127   79367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:55.572236   79367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:55.572314   79367 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:55.572385   79367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:55.574178   79367 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:55.574288   79367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:55.574372   79367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:55.574485   79367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:55.574573   79367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:55.574669   79367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:55.574740   79367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:55.574811   79367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:55.574909   79367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:55.575014   79367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:55.575135   79367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:55.575187   79367 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:55.575238   79367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:55.575288   79367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:55.575359   79367 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:55.575438   79367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:55.575521   79367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:55.575608   79367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:55.575759   79367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:55.575869   79367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:55.577331   79367 out.go:204]   - Booting up control plane ...
	I0814 17:42:55.577429   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:55.577511   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:55.577587   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:55.577773   79367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:55.577908   79367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:55.577968   79367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:55.578152   79367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:55.578301   79367 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:55.578368   79367 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.938552ms
	I0814 17:42:55.578428   79367 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:55.578480   79367 kubeadm.go:310] [api-check] The API server is healthy after 5.00239154s
	I0814 17:42:55.578605   79367 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:55.578777   79367 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:55.578863   79367 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:55.579025   79367 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-545149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:55.579100   79367 kubeadm.go:310] [bootstrap-token] Using token: qzd0yh.k8a8j7f6vmqndeav
	I0814 17:42:55.580318   79367 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:55.580429   79367 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:55.580503   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:55.580683   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:55.580839   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:55.580935   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:55.581047   79367 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:55.581197   79367 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:55.581235   79367 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:55.581279   79367 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:55.581285   79367 kubeadm.go:310] 
	I0814 17:42:55.581339   79367 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:55.581355   79367 kubeadm.go:310] 
	I0814 17:42:55.581470   79367 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:55.581480   79367 kubeadm.go:310] 
	I0814 17:42:55.581519   79367 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:55.581586   79367 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:55.581654   79367 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:55.581663   79367 kubeadm.go:310] 
	I0814 17:42:55.581749   79367 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:55.581757   79367 kubeadm.go:310] 
	I0814 17:42:55.581798   79367 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:55.581804   79367 kubeadm.go:310] 
	I0814 17:42:55.581857   79367 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:55.581944   79367 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:55.582007   79367 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:55.582014   79367 kubeadm.go:310] 
	I0814 17:42:55.582085   79367 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:55.582148   79367 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:55.582154   79367 kubeadm.go:310] 
	I0814 17:42:55.582221   79367 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582313   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:55.582333   79367 kubeadm.go:310] 	--control-plane 
	I0814 17:42:55.582336   79367 kubeadm.go:310] 
	I0814 17:42:55.582426   79367 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:55.582434   79367 kubeadm.go:310] 
	I0814 17:42:55.582518   79367 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582678   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
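
The --discovery-token-ca-cert-hash in the join command above can be recomputed on the control-plane node to validate a join invocation. The sketch below uses the standard openssl pipeline from the kubeadm documentation and the certificate directory reported earlier in this log (/var/lib/minikube/certs), assuming the default RSA CA key.

    # Recompute the CA certificate hash shown in the join command above.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # output should match the hex part of the sha256:... value in the join command
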
	I0814 17:42:55.582691   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:42:55.582697   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:55.584337   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:55.585493   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:55.596201   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:42:55.617012   79367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:55.617115   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:55.617152   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-545149 minikube.k8s.io/updated_at=2024_08_14T17_42_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=no-preload-545149 minikube.k8s.io/primary=true
	I0814 17:42:55.794262   79367 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:55.794421   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.294450   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.795280   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.294604   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.794700   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.294863   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.795404   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.295066   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.794529   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.294720   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.409254   79367 kubeadm.go:1113] duration metric: took 4.79220609s to wait for elevateKubeSystemPrivileges
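
The repeated "kubectl get sa default" runs above are a readiness poll: the command is retried roughly every 500ms until the "default" ServiceAccount exists, at which point the RBAC elevation step completes. A condensed sketch of that loop, using the binary and kubeconfig paths shown in the log:

    # Sketch of the ServiceAccount readiness poll performed above.
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
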
	I0814 17:43:00.409300   79367 kubeadm.go:394] duration metric: took 4m53.401266889s to StartCluster
	I0814 17:43:00.409323   79367 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.409419   79367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:43:00.411076   79367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.411313   79367 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:43:00.411438   79367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:43:00.411521   79367 addons.go:69] Setting storage-provisioner=true in profile "no-preload-545149"
	I0814 17:43:00.411529   79367 addons.go:69] Setting default-storageclass=true in profile "no-preload-545149"
	I0814 17:43:00.411552   79367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545149"
	I0814 17:43:00.411554   79367 addons.go:234] Setting addon storage-provisioner=true in "no-preload-545149"
	I0814 17:43:00.411564   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:43:00.411568   79367 addons.go:69] Setting metrics-server=true in profile "no-preload-545149"
	W0814 17:43:00.411566   79367 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:43:00.411601   79367 addons.go:234] Setting addon metrics-server=true in "no-preload-545149"
	W0814 17:43:00.411612   79367 addons.go:243] addon metrics-server should already be in state true
	I0814 17:43:00.411637   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411646   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411922   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.411954   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412019   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412053   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412076   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412103   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412914   79367 out.go:177] * Verifying Kubernetes components...
	I0814 17:43:00.414471   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:43:00.427965   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0814 17:43:00.427966   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0814 17:43:00.428460   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428608   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428985   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429004   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429130   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429147   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429206   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0814 17:43:00.429346   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429443   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429498   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.429543   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.430131   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.430152   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.430435   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.430446   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.430718   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.431238   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.431270   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.433273   79367 addons.go:234] Setting addon default-storageclass=true in "no-preload-545149"
	W0814 17:43:00.433292   79367 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:43:00.433319   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.433551   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.433581   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.450138   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0814 17:43:00.450327   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0814 17:43:00.450697   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.450818   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.451527   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451547   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451695   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451706   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451958   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.452224   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.452283   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.453110   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.453141   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.453937   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.455467   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0814 17:43:00.455825   79367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:43:00.455934   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.456456   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.456479   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.456964   79367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.456981   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:43:00.456999   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.457000   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.457144   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.459264   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.460208   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460606   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.460636   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460750   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.460858   79367 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:43:00.460989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.461150   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.461281   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.462118   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:43:00.462138   79367 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:43:00.462156   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.465200   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465643   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.465710   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465829   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.466004   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.466165   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.466312   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.478054   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0814 17:43:00.478616   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.479176   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.479198   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.479536   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.479725   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.481351   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.481574   79367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.481588   79367 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:43:00.481606   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.484454   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484738   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.484771   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.485222   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.485370   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.485485   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.617562   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:43:00.665134   79367 node_ready.go:35] waiting up to 6m0s for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673659   79367 node_ready.go:49] node "no-preload-545149" has status "Ready":"True"
	I0814 17:43:00.673680   79367 node_ready.go:38] duration metric: took 8.508683ms for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673689   79367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:00.680313   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:00.810401   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.827621   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.871727   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:43:00.871752   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:43:00.969061   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:43:00.969088   79367 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:43:01.103808   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.103839   79367 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:43:01.198160   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.880623   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052957744s)
	I0814 17:43:01.880683   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880697   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.880749   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070305568s)
	I0814 17:43:01.880785   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880804   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881075   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881093   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881103   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881115   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881248   79367 main.go:141] libmachine: (no-preload-545149) DBG | Closing plugin on server side
	I0814 17:43:01.881284   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881312   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881336   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881375   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881385   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881396   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881682   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881703   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.896050   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.896076   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.896351   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.896370   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131404   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131427   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.131744   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.131768   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131780   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131788   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.132004   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.132026   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.132042   79367 addons.go:475] Verifying addon metrics-server=true in "no-preload-545149"
	I0814 17:43:02.133699   79367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:43:03.265508   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:03.265720   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:02.135365   79367 addons.go:510] duration metric: took 1.72392081s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
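The addon verification step above ("Verifying addon metrics-server=true") amounts to waiting for the metrics-server Deployment in kube-system to report its replicas available. A minimal client-go sketch of that check follows; the Deployment name is inferred from the metrics-server pod prefix seen later in this log, and the kubeconfig path is the one minikube uses inside the VM, so treat both as illustrative assumptions rather than minikube's actual code path.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for wherever this sketch is run from.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The addon counts as verified once every desired replica is available.
	fmt.Printf("metrics-server: %d/%d replicas available\n", dep.Status.AvailableReplicas, *dep.Spec.Replicas)
}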
	I0814 17:43:02.687160   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:05.186062   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:07.187193   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:09.188957   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.188990   79367 pod_ready.go:81] duration metric: took 8.508650006s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.189003   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194469   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.194492   79367 pod_ready.go:81] duration metric: took 5.48133ms for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194501   79367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199127   79367 pod_ready.go:92] pod "etcd-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.199150   79367 pod_ready.go:81] duration metric: took 4.643296ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199159   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203804   79367 pod_ready.go:92] pod "kube-apiserver-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.203825   79367 pod_ready.go:81] duration metric: took 4.659513ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203837   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208443   79367 pod_ready.go:92] pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.208465   79367 pod_ready.go:81] duration metric: took 4.620634ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208479   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584443   79367 pod_ready.go:92] pod "kube-proxy-s6bps" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.584471   79367 pod_ready.go:81] duration metric: took 375.985094ms for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584481   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985476   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.985504   79367 pod_ready.go:81] duration metric: took 401.014791ms for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985515   79367 pod_ready.go:38] duration metric: took 9.311816641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
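Each pod_ready wait above polls a pod until its PodReady condition reports True. A short client-go sketch of a single such check is below; the helper name podIsReady and the kubeconfig path are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has its Ready condition set to True,
// which is the condition the waits above are polling for.
func podIsReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(cs, "kube-system", "etcd-no-preload-545149")
	fmt.Println(ready, err)
}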
	I0814 17:43:09.985534   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:43:09.985603   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:43:10.002239   79367 api_server.go:72] duration metric: took 9.590875358s to wait for apiserver process to appear ...
	I0814 17:43:10.002276   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:43:10.002304   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:43:10.009410   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:43:10.010351   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:43:10.010381   79367 api_server.go:131] duration metric: took 8.098086ms to wait for apiserver health ...
	I0814 17:43:10.010389   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:43:10.189597   79367 system_pods.go:59] 9 kube-system pods found
	I0814 17:43:10.189629   79367 system_pods.go:61] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.189634   79367 system_pods.go:61] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.189638   79367 system_pods.go:61] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.189642   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.189646   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.189649   79367 system_pods.go:61] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.189652   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.189658   79367 system_pods.go:61] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.189662   79367 system_pods.go:61] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.189670   79367 system_pods.go:74] duration metric: took 179.275641ms to wait for pod list to return data ...
	I0814 17:43:10.189678   79367 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:43:10.385690   79367 default_sa.go:45] found service account: "default"
	I0814 17:43:10.385715   79367 default_sa.go:55] duration metric: took 196.030333ms for default service account to be created ...
	I0814 17:43:10.385723   79367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:43:10.590237   79367 system_pods.go:86] 9 kube-system pods found
	I0814 17:43:10.590272   79367 system_pods.go:89] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.590279   79367 system_pods.go:89] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.590285   79367 system_pods.go:89] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.590291   79367 system_pods.go:89] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.590299   79367 system_pods.go:89] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.590306   79367 system_pods.go:89] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.590312   79367 system_pods.go:89] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.590322   79367 system_pods.go:89] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.590335   79367 system_pods.go:89] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.590351   79367 system_pods.go:126] duration metric: took 204.620982ms to wait for k8s-apps to be running ...
	I0814 17:43:10.590364   79367 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:43:10.590418   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:10.605594   79367 system_svc.go:56] duration metric: took 15.223089ms WaitForService to wait for kubelet
	I0814 17:43:10.605624   79367 kubeadm.go:582] duration metric: took 10.194267533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:43:10.605644   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:43:10.786127   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:43:10.786160   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:43:10.786173   79367 node_conditions.go:105] duration metric: took 180.522994ms to run NodePressure ...
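The NodePressure verification above reads the node's advertised capacity (ephemeral storage 17734596Ki, 2 CPUs) from its status. A client-go sketch that reads the same fields, under the same assumed kubeconfig path as the earlier sketches:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same capacity figures the node_conditions check reports.
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}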
	I0814 17:43:10.786187   79367 start.go:241] waiting for startup goroutines ...
	I0814 17:43:10.786197   79367 start.go:246] waiting for cluster config update ...
	I0814 17:43:10.786210   79367 start.go:255] writing updated cluster config ...
	I0814 17:43:10.786498   79367 ssh_runner.go:195] Run: rm -f paused
	I0814 17:43:10.834139   79367 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:43:10.836315   79367 out.go:177] * Done! kubectl is now configured to use "no-preload-545149" cluster and "default" namespace by default
	I0814 17:43:43.267316   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:43.267596   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:43.267623   80228 kubeadm.go:310] 
	I0814 17:43:43.267680   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:43:43.267757   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:43:43.267778   80228 kubeadm.go:310] 
	I0814 17:43:43.267839   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:43:43.267894   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:43:43.268029   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:43:43.268044   80228 kubeadm.go:310] 
	I0814 17:43:43.268190   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:43:43.268247   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:43:43.268296   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:43:43.268305   80228 kubeadm.go:310] 
	I0814 17:43:43.268446   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:43:43.268561   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:43:43.268572   80228 kubeadm.go:310] 
	I0814 17:43:43.268741   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:43:43.268907   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:43:43.269021   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:43:43.269120   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:43:43.269131   80228 kubeadm.go:310] 
	I0814 17:43:43.269560   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:43:43.269642   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:43:43.269698   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 17:43:43.269809   80228 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
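The repeated [kubelet-check] failures above are kubeadm polling the kubelet's local healthz endpoint and getting connection refused because the kubelet never became healthy. A minimal Go sketch of that probe loop; port 10248 and the 40s initial timeout come from the log, while the 5s poll interval is an assumption for illustration:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// kubeletHealthy performs the same probe the [kubelet-check] lines report on:
// an HTTP GET against the kubelet's local healthz endpoint.
func kubeletHealthy() bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		return false // e.g. "connection refused" when the kubelet is not running
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// Poll for up to 40s, roughly matching kubeadm's initial kubelet-check timeout.
	deadline := time.Now().Add(40 * time.Second)
	for time.Now().Before(deadline) {
		if kubeletHealthy() {
			fmt.Println("kubelet is healthy")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet to become healthy")
}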
	
	I0814 17:43:43.269853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:43:43.733975   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:43.748632   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:43:43.758474   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:43:43.758493   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:43:43.758543   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:43:43.767721   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:43:43.767777   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:43:43.777259   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:43:43.786562   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:43:43.786623   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:43:43.795253   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.803677   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:43:43.803733   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.812416   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:43:43.821020   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:43:43.821075   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
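The config check above (kubeadm.go:163) greps each leftover kubeconfig for the expected control-plane endpoint and removes any file that does not contain it, so the retried kubeadm init can rewrite them. A simplified Go sketch of that grep-then-rm logic; the helper name removeStaleKubeconfigs is illustrative, not minikube's own:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
// reference the expected control-plane endpoint, mirroring the grep/rm pairs above.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale config: %s\n", p)
			os.Remove(p) // ignore errors; the file may already be absent
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}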
	I0814 17:43:43.829709   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:43:44.024836   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:45:40.060126   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:45:40.060266   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:45:40.061931   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:45:40.062003   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:45:40.062110   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:45:40.062231   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:45:40.062360   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:45:40.062459   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:45:40.063940   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:45:40.064041   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:45:40.064124   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:45:40.064230   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:45:40.064305   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:45:40.064398   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:45:40.064471   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:45:40.064550   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:45:40.064632   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:45:40.064712   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:45:40.064798   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:45:40.064857   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:45:40.064913   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:45:40.064956   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:45:40.065040   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:45:40.065146   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:45:40.065222   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:45:40.065366   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:45:40.065490   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:45:40.065547   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:45:40.065648   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:45:40.068108   80228 out.go:204]   - Booting up control plane ...
	I0814 17:45:40.068211   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:45:40.068294   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:45:40.068395   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:45:40.068522   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:45:40.068675   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:45:40.068751   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:45:40.068843   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069048   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069141   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069393   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069510   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069756   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069823   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069982   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070051   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.070204   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070211   80228 kubeadm.go:310] 
	I0814 17:45:40.070244   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:45:40.070291   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:45:40.070299   80228 kubeadm.go:310] 
	I0814 17:45:40.070337   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:45:40.070379   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:45:40.070504   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:45:40.070521   80228 kubeadm.go:310] 
	I0814 17:45:40.070673   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:45:40.070723   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:45:40.070764   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:45:40.070776   80228 kubeadm.go:310] 
	I0814 17:45:40.070876   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:45:40.070945   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:45:40.070953   80228 kubeadm.go:310] 
	I0814 17:45:40.071045   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:45:40.071151   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:45:40.071246   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:45:40.071363   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:45:40.071453   80228 kubeadm.go:310] 
	I0814 17:45:40.071481   80228 kubeadm.go:394] duration metric: took 8m2.599023024s to StartCluster
	I0814 17:45:40.071554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:45:40.071617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:45:40.115691   80228 cri.go:89] found id: ""
	I0814 17:45:40.115719   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.115727   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:45:40.115734   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:45:40.115798   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:45:40.155537   80228 cri.go:89] found id: ""
	I0814 17:45:40.155566   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.155574   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:45:40.155580   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:45:40.155645   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:45:40.189570   80228 cri.go:89] found id: ""
	I0814 17:45:40.189604   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.189616   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:45:40.189625   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:45:40.189708   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:45:40.222496   80228 cri.go:89] found id: ""
	I0814 17:45:40.222521   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.222528   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:45:40.222533   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:45:40.222590   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:45:40.255095   80228 cri.go:89] found id: ""
	I0814 17:45:40.255129   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.255142   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:45:40.255151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:45:40.255233   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:45:40.290297   80228 cri.go:89] found id: ""
	I0814 17:45:40.290326   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.290341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:45:40.290348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:45:40.290396   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:45:40.326660   80228 cri.go:89] found id: ""
	I0814 17:45:40.326685   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.326695   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:45:40.326701   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:45:40.326764   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:45:40.359867   80228 cri.go:89] found id: ""
	I0814 17:45:40.359896   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.359908   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
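The enumeration above asks crictl for each control-plane component by name and treats an empty ID list as "no container found". A local Go sketch of the same query; the explicit --runtime-endpoint flag mirrors the hint in the kubeadm output, and the listCRIContainers helper is illustrative rather than minikube's actual cri.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the IDs of all containers (in any state) whose
// name matches the given filter, as reported by crictl against CRI-O.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "/var/run/crio/crio.sock",
		"ps", "-a", "--quiet", "--name", name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := listCRIContainers(name)
		if err != nil {
			fmt.Printf("%s: error: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}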
	I0814 17:45:40.359918   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:45:40.359933   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:45:40.397513   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:45:40.397543   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:45:40.451744   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:45:40.451778   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:45:40.475817   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:45:40.475843   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:45:40.575913   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:45:40.575933   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:45:40.575946   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 17:45:40.683455   80228 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:45:40.683509   80228 out.go:239] * 
	W0814 17:45:40.683587   80228 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.683623   80228 out.go:239] * 
	W0814 17:45:40.684431   80228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:45:40.688043   80228 out.go:177] 
	W0814 17:45:40.689238   80228 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.689291   80228 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:45:40.689318   80228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:45:40.690913   80228 out.go:177] 
	
	
	==> CRI-O <==
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.941817565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658085941772537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b430c3a-5cd6-4df8-8053-22a4d7065ed8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.942406379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d50f4291-ec05-4ddb-b6d3-9831460af596 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.942509757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d50f4291-ec05-4ddb-b6d3-9831460af596 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.942573020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d50f4291-ec05-4ddb-b6d3-9831460af596 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.972421120Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=866e966d-d0c3-4b59-b5fc-5e8e740c7e87 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.972510247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=866e966d-d0c3-4b59-b5fc-5e8e740c7e87 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.973511865Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d1e4417-bbd7-4093-9b8b-49c1f024c89c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.973950054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658085973929154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d1e4417-bbd7-4093-9b8b-49c1f024c89c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.974448952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d453da2e-1be1-4b2e-9434-52891d01f9f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.974569686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d453da2e-1be1-4b2e-9434-52891d01f9f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:45 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:45.974614415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d453da2e-1be1-4b2e-9434-52891d01f9f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.007437893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2e0d4ae-caae-4e52-a555-20462f097798 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.007541702Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2e0d4ae-caae-4e52-a555-20462f097798 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.008980257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b9ad0ff-0c3c-43b3-a71f-4a52639c14d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.009514066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658086009478131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b9ad0ff-0c3c-43b3-a71f-4a52639c14d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.010056190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfa019c3-956a-47ba-94be-ef1e7750262d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.010147704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfa019c3-956a-47ba-94be-ef1e7750262d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.010198353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bfa019c3-956a-47ba-94be-ef1e7750262d name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.041084216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fb6969b-3f2d-41c1-aa71-8f5146403134 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.041215157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fb6969b-3f2d-41c1-aa71-8f5146403134 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.042675077Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7be8c551-c51a-4ae4-8a0a-f600927e8519 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.043219121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658086043187466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7be8c551-c51a-4ae4-8a0a-f600927e8519 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.044011213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5853784-0227-4597-bacb-5cf1fba66217 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.044090458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5853784-0227-4597-bacb-5cf1fba66217 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:54:46 old-k8s-version-505584 crio[648]: time="2024-08-14 17:54:46.044147889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b5853784-0227-4597-bacb-5cf1fba66217 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
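	
	The refused connection on localhost:8443 is consistent with an apiserver that never came up. A quick manual check, sketched here with the same ssh pattern used elsewhere in this report (curl and the crio socket path shown above are assumed to be available in the guest):
	
		out/minikube-linux-amd64 -p old-k8s-version-505584 ssh "curl -sk https://localhost:8443/healthz || echo 'nothing answering on 8443'"
		out/minikube-linux-amd64 -p old-k8s-version-505584 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube-apiserver"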
	
	
	==> dmesg <==
	[Aug14 17:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051751] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038545] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.928700] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.931842] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.538149] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.402686] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068532] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066584] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.214010] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.127681] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.254794] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.216784] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.064759] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.847232] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +11.985584] kauditd_printk_skb: 46 callbacks suppressed
	[Aug14 17:41] systemd-fstab-generator[5130]: Ignoring "noauto" option for root device
	[Aug14 17:43] systemd-fstab-generator[5418]: Ignoring "noauto" option for root device
	[  +0.067751] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 17:54:46 up 17 min,  0 users,  load average: 0.08, 0.08, 0.07
	Linux old-k8s-version-505584 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000b6aa80, 0xc00009e0c0)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: created by k8s.io/kubernetes/pkg/kubelet/config.newSourceApiserverFromLW
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47 +0x1e5
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: goroutine 146 [sync.Cond.Wait]:
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: runtime.goparkunlock(...)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /usr/local/go/src/runtime/proc.go:312
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: sync.runtime_notifyListWait(0xc000c7a208, 0x0)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /usr/local/go/src/runtime/sema.go:513 +0xf8
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: sync.(*Cond).Wait(0xc000c7a1f8)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /usr/local/go/src/sync/cond.go:56 +0x9d
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*DeltaFIFO).Pop(0xc000c7a1e0, 0xc000baded0, 0x0, 0x0, 0x0, 0x0)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/delta_fifo.go:493 +0x98
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).processLoop(0xc000babcb0)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:183 +0x42
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000ae7e68)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000ae7e68, 0x4f0ac40, 0xc000d80360, 0xc000c46701, 0xc00009e0c0)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000ae7e68, 0x3b9aca00, 0x0, 0xc000439901, 0xc00009e0c0)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run(0xc000babcb0, 0xc00009e0c0)
	Aug 14 17:54:46 old-k8s-version-505584 kubelet[6609]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:154 +0x2e5
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 2 (221.972896ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-505584" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.36s)
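A compact way to see the pieces the post-mortem queries one at a time is a combined Go template; the {{.Host}} and {{.APIServer}} fields are the ones the helpers use in this report, and the single quotes are only a shell-quoting convenience. Exit status 2 here reflects a stopped component and, as the helper notes, "may be ok" after a stop.

    out/minikube-linux-amd64 status -p old-k8s-version-505584 --format='{{.Host}} {{.APIServer}}'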

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (444.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-309673 -n embed-certs-309673
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-14 17:57:54.939030448 +0000 UTC m=+6508.624313236
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-309673 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-309673 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.607µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-309673 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
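The check this test performs can be reproduced by hand against the same context; this is a sketch using the selector and namespace from the test, with the jsonpath query added here only to surface the deployed image:

    kubectl --context embed-certs-309673 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    kubectl --context embed-certs-309673 get deploy dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[*].image}'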
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-309673 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-309673 logs -n 25: (1.244166637s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-984053 sudo find                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo crio                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-984053                                       | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-005029 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | disable-driver-mounts-005029                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:30 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-545149             | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309673            | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	| start   | -p newest-cni-471541 --memory=2200 --alsologtostderr   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-471541             | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-471541                                   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
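	
	Several Audit entries above wrap across table rows; for readability, two of them reconstructed as single commands (the out/minikube-linux-amd64 prefix is how the same binary is invoked elsewhere in this report):
	
		out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-309673 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
		out/minikube-linux-amd64 start -p newest-cni-471541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0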
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:57:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:57:05.657506   86299 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:57:05.657782   86299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:57:05.657792   86299 out.go:304] Setting ErrFile to fd 2...
	I0814 17:57:05.657798   86299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:57:05.657998   86299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:57:05.658581   86299 out.go:298] Setting JSON to false
	I0814 17:57:05.659605   86299 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9570,"bootTime":1723648656,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:57:05.659662   86299 start.go:139] virtualization: kvm guest
	I0814 17:57:05.662552   86299 out.go:177] * [newest-cni-471541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:57:05.663970   86299 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:57:05.663967   86299 notify.go:220] Checking for updates...
	I0814 17:57:05.665550   86299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:57:05.666948   86299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:57:05.668170   86299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:57:05.669321   86299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:57:05.670447   86299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:57:05.671933   86299 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:05.672015   86299 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:05.672096   86299 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:05.672164   86299 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:57:05.707750   86299 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 17:57:05.708761   86299 start.go:297] selected driver: kvm2
	I0814 17:57:05.708778   86299 start.go:901] validating driver "kvm2" against <nil>
	I0814 17:57:05.708798   86299 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:57:05.709845   86299 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:57:05.709957   86299 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:57:05.724761   86299 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:57:05.724805   86299 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0814 17:57:05.724831   86299 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0814 17:57:05.725080   86299 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 17:57:05.725145   86299 cni.go:84] Creating CNI manager for ""
	I0814 17:57:05.725157   86299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:57:05.725164   86299 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 17:57:05.725216   86299 start.go:340] cluster config:
	{Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:57:05.725322   86299 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:57:05.727106   86299 out.go:177] * Starting "newest-cni-471541" primary control-plane node in "newest-cni-471541" cluster
	I0814 17:57:05.728058   86299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:57:05.728087   86299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:57:05.728093   86299 cache.go:56] Caching tarball of preloaded images
	I0814 17:57:05.728153   86299 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:57:05.728163   86299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 17:57:05.728246   86299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/config.json ...
	I0814 17:57:05.728261   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/config.json: {Name:mk84f144973bc92a6534aa2eb616796cf2d1d274 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:05.728380   86299 start.go:360] acquireMachinesLock for newest-cni-471541: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:57:05.728418   86299 start.go:364] duration metric: took 20.18µs to acquireMachinesLock for "newest-cni-471541"
	I0814 17:57:05.728434   86299 start.go:93] Provisioning new machine with config: &{Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:57:05.728481   86299 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 17:57:05.729912   86299 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 17:57:05.730078   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:05.730119   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:05.744773   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0814 17:57:05.745230   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:05.745769   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:05.745796   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:05.746130   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:05.746294   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:57:05.746466   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:05.746619   86299 start.go:159] libmachine.API.Create for "newest-cni-471541" (driver="kvm2")
	I0814 17:57:05.746647   86299 client.go:168] LocalClient.Create starting
	I0814 17:57:05.746683   86299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 17:57:05.746721   86299 main.go:141] libmachine: Decoding PEM data...
	I0814 17:57:05.746737   86299 main.go:141] libmachine: Parsing certificate...
	I0814 17:57:05.746794   86299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 17:57:05.746813   86299 main.go:141] libmachine: Decoding PEM data...
	I0814 17:57:05.746826   86299 main.go:141] libmachine: Parsing certificate...
	I0814 17:57:05.746841   86299 main.go:141] libmachine: Running pre-create checks...
	I0814 17:57:05.746853   86299 main.go:141] libmachine: (newest-cni-471541) Calling .PreCreateCheck
	I0814 17:57:05.747139   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetConfigRaw
	I0814 17:57:05.747496   86299 main.go:141] libmachine: Creating machine...
	I0814 17:57:05.747509   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Create
	I0814 17:57:05.747633   86299 main.go:141] libmachine: (newest-cni-471541) Creating KVM machine...
	I0814 17:57:05.748816   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found existing default KVM network
	I0814 17:57:05.749904   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.749762   86322 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:63:32:a0} reservation:<nil>}
	I0814 17:57:05.750737   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.750671   86322 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:90:b2:95} reservation:<nil>}
	I0814 17:57:05.751496   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.751434   86322 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:8e:13:0f} reservation:<nil>}
	I0814 17:57:05.752542   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.752449   86322 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000306fb0}
	I0814 17:57:05.752572   86299 main.go:141] libmachine: (newest-cni-471541) DBG | created network xml: 
	I0814 17:57:05.752596   86299 main.go:141] libmachine: (newest-cni-471541) DBG | <network>
	I0814 17:57:05.752609   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   <name>mk-newest-cni-471541</name>
	I0814 17:57:05.752618   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   <dns enable='no'/>
	I0814 17:57:05.752629   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   
	I0814 17:57:05.752636   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0814 17:57:05.752645   86299 main.go:141] libmachine: (newest-cni-471541) DBG |     <dhcp>
	I0814 17:57:05.752652   86299 main.go:141] libmachine: (newest-cni-471541) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0814 17:57:05.752659   86299 main.go:141] libmachine: (newest-cni-471541) DBG |     </dhcp>
	I0814 17:57:05.752665   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   </ip>
	I0814 17:57:05.752669   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   
	I0814 17:57:05.752674   86299 main.go:141] libmachine: (newest-cni-471541) DBG | </network>
	I0814 17:57:05.752678   86299 main.go:141] libmachine: (newest-cni-471541) DBG | 
	I0814 17:57:05.757647   86299 main.go:141] libmachine: (newest-cni-471541) DBG | trying to create private KVM network mk-newest-cni-471541 192.168.72.0/24...
	I0814 17:57:05.826472   86299 main.go:141] libmachine: (newest-cni-471541) DBG | private KVM network mk-newest-cni-471541 192.168.72.0/24 created
	I0814 17:57:05.826543   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.826439   86322 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:57:05.826566   86299 main.go:141] libmachine: (newest-cni-471541) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541 ...
	I0814 17:57:05.826592   86299 main.go:141] libmachine: (newest-cni-471541) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 17:57:05.826634   86299 main.go:141] libmachine: (newest-cni-471541) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 17:57:06.074297   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:06.074149   86322 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa...
	I0814 17:57:06.297180   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:06.297037   86322 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/newest-cni-471541.rawdisk...
	I0814 17:57:06.297218   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Writing magic tar header
	I0814 17:57:06.297238   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Writing SSH key tar header
	I0814 17:57:06.297259   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:06.297171   86322 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541 ...
	I0814 17:57:06.297336   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541
	I0814 17:57:06.297374   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 17:57:06.297390   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:57:06.297409   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 17:57:06.297418   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 17:57:06.297430   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541 (perms=drwx------)
	I0814 17:57:06.297443   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 17:57:06.297457   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 17:57:06.297469   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 17:57:06.297483   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 17:57:06.297494   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 17:57:06.297506   86299 main.go:141] libmachine: (newest-cni-471541) Creating domain...
	I0814 17:57:06.297516   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins
	I0814 17:57:06.297527   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home
	I0814 17:57:06.297535   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Skipping /home - not owner
	I0814 17:57:06.298862   86299 main.go:141] libmachine: (newest-cni-471541) define libvirt domain using xml: 
	I0814 17:57:06.298875   86299 main.go:141] libmachine: (newest-cni-471541) <domain type='kvm'>
	I0814 17:57:06.298883   86299 main.go:141] libmachine: (newest-cni-471541)   <name>newest-cni-471541</name>
	I0814 17:57:06.298889   86299 main.go:141] libmachine: (newest-cni-471541)   <memory unit='MiB'>2200</memory>
	I0814 17:57:06.298894   86299 main.go:141] libmachine: (newest-cni-471541)   <vcpu>2</vcpu>
	I0814 17:57:06.298898   86299 main.go:141] libmachine: (newest-cni-471541)   <features>
	I0814 17:57:06.298904   86299 main.go:141] libmachine: (newest-cni-471541)     <acpi/>
	I0814 17:57:06.298911   86299 main.go:141] libmachine: (newest-cni-471541)     <apic/>
	I0814 17:57:06.298916   86299 main.go:141] libmachine: (newest-cni-471541)     <pae/>
	I0814 17:57:06.298923   86299 main.go:141] libmachine: (newest-cni-471541)     
	I0814 17:57:06.298928   86299 main.go:141] libmachine: (newest-cni-471541)   </features>
	I0814 17:57:06.298932   86299 main.go:141] libmachine: (newest-cni-471541)   <cpu mode='host-passthrough'>
	I0814 17:57:06.298941   86299 main.go:141] libmachine: (newest-cni-471541)   
	I0814 17:57:06.298957   86299 main.go:141] libmachine: (newest-cni-471541)   </cpu>
	I0814 17:57:06.298968   86299 main.go:141] libmachine: (newest-cni-471541)   <os>
	I0814 17:57:06.298981   86299 main.go:141] libmachine: (newest-cni-471541)     <type>hvm</type>
	I0814 17:57:06.298989   86299 main.go:141] libmachine: (newest-cni-471541)     <boot dev='cdrom'/>
	I0814 17:57:06.298999   86299 main.go:141] libmachine: (newest-cni-471541)     <boot dev='hd'/>
	I0814 17:57:06.299007   86299 main.go:141] libmachine: (newest-cni-471541)     <bootmenu enable='no'/>
	I0814 17:57:06.299017   86299 main.go:141] libmachine: (newest-cni-471541)   </os>
	I0814 17:57:06.299104   86299 main.go:141] libmachine: (newest-cni-471541)   <devices>
	I0814 17:57:06.299133   86299 main.go:141] libmachine: (newest-cni-471541)     <disk type='file' device='cdrom'>
	I0814 17:57:06.299147   86299 main.go:141] libmachine: (newest-cni-471541)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/boot2docker.iso'/>
	I0814 17:57:06.299162   86299 main.go:141] libmachine: (newest-cni-471541)       <target dev='hdc' bus='scsi'/>
	I0814 17:57:06.299171   86299 main.go:141] libmachine: (newest-cni-471541)       <readonly/>
	I0814 17:57:06.299176   86299 main.go:141] libmachine: (newest-cni-471541)     </disk>
	I0814 17:57:06.299181   86299 main.go:141] libmachine: (newest-cni-471541)     <disk type='file' device='disk'>
	I0814 17:57:06.299189   86299 main.go:141] libmachine: (newest-cni-471541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 17:57:06.299203   86299 main.go:141] libmachine: (newest-cni-471541)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/newest-cni-471541.rawdisk'/>
	I0814 17:57:06.299211   86299 main.go:141] libmachine: (newest-cni-471541)       <target dev='hda' bus='virtio'/>
	I0814 17:57:06.299216   86299 main.go:141] libmachine: (newest-cni-471541)     </disk>
	I0814 17:57:06.299223   86299 main.go:141] libmachine: (newest-cni-471541)     <interface type='network'>
	I0814 17:57:06.299229   86299 main.go:141] libmachine: (newest-cni-471541)       <source network='mk-newest-cni-471541'/>
	I0814 17:57:06.299241   86299 main.go:141] libmachine: (newest-cni-471541)       <model type='virtio'/>
	I0814 17:57:06.299249   86299 main.go:141] libmachine: (newest-cni-471541)     </interface>
	I0814 17:57:06.299253   86299 main.go:141] libmachine: (newest-cni-471541)     <interface type='network'>
	I0814 17:57:06.299261   86299 main.go:141] libmachine: (newest-cni-471541)       <source network='default'/>
	I0814 17:57:06.299267   86299 main.go:141] libmachine: (newest-cni-471541)       <model type='virtio'/>
	I0814 17:57:06.299274   86299 main.go:141] libmachine: (newest-cni-471541)     </interface>
	I0814 17:57:06.299279   86299 main.go:141] libmachine: (newest-cni-471541)     <serial type='pty'>
	I0814 17:57:06.299285   86299 main.go:141] libmachine: (newest-cni-471541)       <target port='0'/>
	I0814 17:57:06.299290   86299 main.go:141] libmachine: (newest-cni-471541)     </serial>
	I0814 17:57:06.299297   86299 main.go:141] libmachine: (newest-cni-471541)     <console type='pty'>
	I0814 17:57:06.299303   86299 main.go:141] libmachine: (newest-cni-471541)       <target type='serial' port='0'/>
	I0814 17:57:06.299314   86299 main.go:141] libmachine: (newest-cni-471541)     </console>
	I0814 17:57:06.299319   86299 main.go:141] libmachine: (newest-cni-471541)     <rng model='virtio'>
	I0814 17:57:06.299344   86299 main.go:141] libmachine: (newest-cni-471541)       <backend model='random'>/dev/random</backend>
	I0814 17:57:06.299358   86299 main.go:141] libmachine: (newest-cni-471541)     </rng>
	I0814 17:57:06.299376   86299 main.go:141] libmachine: (newest-cni-471541)     
	I0814 17:57:06.299389   86299 main.go:141] libmachine: (newest-cni-471541)     
	I0814 17:57:06.299399   86299 main.go:141] libmachine: (newest-cni-471541)   </devices>
	I0814 17:57:06.299410   86299 main.go:141] libmachine: (newest-cni-471541) </domain>
	I0814 17:57:06.299420   86299 main.go:141] libmachine: (newest-cni-471541) 
	I0814 17:57:06.303763   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:84:ea:86 in network default
	I0814 17:57:06.304293   86299 main.go:141] libmachine: (newest-cni-471541) Ensuring networks are active...
	I0814 17:57:06.304318   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:06.305003   86299 main.go:141] libmachine: (newest-cni-471541) Ensuring network default is active
	I0814 17:57:06.305448   86299 main.go:141] libmachine: (newest-cni-471541) Ensuring network mk-newest-cni-471541 is active
	I0814 17:57:06.306017   86299 main.go:141] libmachine: (newest-cni-471541) Getting domain xml...
	I0814 17:57:06.306811   86299 main.go:141] libmachine: (newest-cni-471541) Creating domain...
	I0814 17:57:07.577610   86299 main.go:141] libmachine: (newest-cni-471541) Waiting to get IP...
	I0814 17:57:07.578406   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:07.578822   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:07.578853   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:07.578804   86322 retry.go:31] will retry after 192.490018ms: waiting for machine to come up
	I0814 17:57:07.773297   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:07.773800   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:07.773827   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:07.773757   86322 retry.go:31] will retry after 331.531479ms: waiting for machine to come up
	I0814 17:57:08.107381   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:08.107813   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:08.107832   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:08.107782   86322 retry.go:31] will retry after 443.490585ms: waiting for machine to come up
	I0814 17:57:08.552505   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:08.553075   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:08.553108   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:08.553010   86322 retry.go:31] will retry after 597.669641ms: waiting for machine to come up
	I0814 17:57:09.152293   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:09.152748   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:09.152779   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:09.152702   86322 retry.go:31] will retry after 728.666666ms: waiting for machine to come up
	I0814 17:57:09.882516   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:09.882939   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:09.882969   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:09.882884   86322 retry.go:31] will retry after 681.482968ms: waiting for machine to come up
	I0814 17:57:10.565460   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:10.565874   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:10.565905   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:10.565828   86322 retry.go:31] will retry after 1.190044961s: waiting for machine to come up
	I0814 17:57:11.758291   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:11.758824   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:11.758851   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:11.758775   86322 retry.go:31] will retry after 1.16384016s: waiting for machine to come up
	I0814 17:57:12.924081   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:12.924517   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:12.924539   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:12.924482   86322 retry.go:31] will retry after 1.365508056s: waiting for machine to come up
	I0814 17:57:14.292166   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:14.292626   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:14.292645   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:14.292603   86322 retry.go:31] will retry after 1.879924239s: waiting for machine to come up
	I0814 17:57:16.174619   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:16.175097   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:16.175128   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:16.175054   86322 retry.go:31] will retry after 2.741925753s: waiting for machine to come up
	I0814 17:57:18.919315   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:18.919832   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:18.919856   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:18.919796   86322 retry.go:31] will retry after 2.97592505s: waiting for machine to come up
	I0814 17:57:21.897443   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:21.897938   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:21.897961   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:21.897889   86322 retry.go:31] will retry after 3.312414184s: waiting for machine to come up
	I0814 17:57:25.213217   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.213827   86299 main.go:141] libmachine: (newest-cni-471541) Found IP for machine: 192.168.72.111
	I0814 17:57:25.213848   86299 main.go:141] libmachine: (newest-cni-471541) Reserving static IP address...
	I0814 17:57:25.213860   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has current primary IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.214217   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find host DHCP lease matching {name: "newest-cni-471541", mac: "52:54:00:ae:15:ce", ip: "192.168.72.111"} in network mk-newest-cni-471541
	I0814 17:57:25.290900   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Getting to WaitForSSH function...
	I0814 17:57:25.290920   86299 main.go:141] libmachine: (newest-cni-471541) Reserved static IP address: 192.168.72.111
	I0814 17:57:25.290930   86299 main.go:141] libmachine: (newest-cni-471541) Waiting for SSH to be available...
	I0814 17:57:25.293509   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.293998   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.294027   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.294199   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Using SSH client type: external
	I0814 17:57:25.294224   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa (-rw-------)
	I0814 17:57:25.294251   86299 main.go:141] libmachine: (newest-cni-471541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:57:25.294263   86299 main.go:141] libmachine: (newest-cni-471541) DBG | About to run SSH command:
	I0814 17:57:25.294273   86299 main.go:141] libmachine: (newest-cni-471541) DBG | exit 0
	I0814 17:57:25.419450   86299 main.go:141] libmachine: (newest-cni-471541) DBG | SSH cmd err, output: <nil>: 
	I0814 17:57:25.419760   86299 main.go:141] libmachine: (newest-cni-471541) KVM machine creation complete!
	I0814 17:57:25.420099   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetConfigRaw
	I0814 17:57:25.420562   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:25.420751   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:25.420946   86299 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 17:57:25.420960   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:57:25.422359   86299 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 17:57:25.422372   86299 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 17:57:25.422378   86299 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 17:57:25.422384   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.424518   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.424903   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.424928   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.425077   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.425287   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.425460   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.425590   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:25.425811   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:25.426030   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:25.426041   86299 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 17:57:25.526484   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:57:25.526510   86299 main.go:141] libmachine: Detecting the provisioner...
	I0814 17:57:25.526537   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.529297   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.529690   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.529712   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.529967   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.530134   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.530265   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.530383   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:25.530591   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:25.530815   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:25.530838   86299 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 17:57:25.631733   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 17:57:25.631838   86299 main.go:141] libmachine: found compatible host: buildroot
	I0814 17:57:25.631853   86299 main.go:141] libmachine: Provisioning with buildroot...
	I0814 17:57:25.631862   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:57:25.632114   86299 buildroot.go:166] provisioning hostname "newest-cni-471541"
	I0814 17:57:25.632140   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:57:25.632316   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.635248   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.635704   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.635747   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.635893   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.636105   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.636292   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.636429   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:25.636624   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:25.636819   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:25.636833   86299 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-471541 && echo "newest-cni-471541" | sudo tee /etc/hostname
	I0814 17:57:25.753175   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-471541
	
	I0814 17:57:25.753201   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.755722   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.756081   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.756110   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.756322   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.756495   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.756649   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.756752   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:25.756885   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:25.757089   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:25.757124   86299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-471541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-471541/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-471541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:57:25.867757   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:57:25.867793   86299 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:57:25.867853   86299 buildroot.go:174] setting up certificates
	I0814 17:57:25.867872   86299 provision.go:84] configureAuth start
	I0814 17:57:25.867890   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:57:25.868202   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:57:25.870840   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.871196   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.871223   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.871364   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.873405   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.873732   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.873759   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.873928   86299 provision.go:143] copyHostCerts
	I0814 17:57:25.873996   86299 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:57:25.874010   86299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:57:25.874092   86299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:57:25.874181   86299 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:57:25.874189   86299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:57:25.874215   86299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:57:25.874281   86299 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:57:25.874290   86299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:57:25.874312   86299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:57:25.874379   86299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.newest-cni-471541 san=[127.0.0.1 192.168.72.111 localhost minikube newest-cni-471541]
	I0814 17:57:25.996425   86299 provision.go:177] copyRemoteCerts
	I0814 17:57:25.996483   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:57:25.996506   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.999060   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.999458   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.999485   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.999651   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.999848   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.000089   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.000226   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:26.081077   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:57:26.106955   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:57:26.131893   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:57:26.156123   86299 provision.go:87] duration metric: took 288.234058ms to configureAuth
	I0814 17:57:26.156159   86299 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:57:26.156391   86299 config.go:182] Loaded profile config "newest-cni-471541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:26.156472   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.159434   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.159811   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.159861   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.160010   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.160224   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.160386   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.160557   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.160764   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:26.161002   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:26.161029   86299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:57:26.420224   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:57:26.420256   86299 main.go:141] libmachine: Checking connection to Docker...
	I0814 17:57:26.420267   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetURL
	I0814 17:57:26.421520   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Using libvirt version 6000000
	I0814 17:57:26.424041   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.424331   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.424366   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.424536   86299 main.go:141] libmachine: Docker is up and running!
	I0814 17:57:26.424548   86299 main.go:141] libmachine: Reticulating splines...
	I0814 17:57:26.424554   86299 client.go:171] duration metric: took 20.677897664s to LocalClient.Create
	I0814 17:57:26.424576   86299 start.go:167] duration metric: took 20.677957595s to libmachine.API.Create "newest-cni-471541"
	I0814 17:57:26.424587   86299 start.go:293] postStartSetup for "newest-cni-471541" (driver="kvm2")
	I0814 17:57:26.424596   86299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:57:26.424608   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.424862   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:57:26.424891   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.427017   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.427490   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.427515   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.427708   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.427885   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.428041   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.428171   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:26.509446   86299 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:57:26.513557   86299 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:57:26.513583   86299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:57:26.513651   86299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:57:26.513748   86299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:57:26.513844   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:57:26.526792   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:57:26.550156   86299 start.go:296] duration metric: took 125.558681ms for postStartSetup
	I0814 17:57:26.550202   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetConfigRaw
	I0814 17:57:26.550835   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:57:26.553916   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.554312   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.554345   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.554604   86299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/config.json ...
	I0814 17:57:26.554798   86299 start.go:128] duration metric: took 20.826306791s to createHost
	I0814 17:57:26.554824   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.557132   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.557546   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.557588   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.557767   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.557942   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.558124   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.558268   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.558473   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:26.558646   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:26.558665   86299 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:57:26.659772   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723658246.633049223
	
	I0814 17:57:26.659793   86299 fix.go:216] guest clock: 1723658246.633049223
	I0814 17:57:26.659801   86299 fix.go:229] Guest: 2024-08-14 17:57:26.633049223 +0000 UTC Remote: 2024-08-14 17:57:26.554810264 +0000 UTC m=+20.939172484 (delta=78.238959ms)
	I0814 17:57:26.659830   86299 fix.go:200] guest clock delta is within tolerance: 78.238959ms
	I0814 17:57:26.659835   86299 start.go:83] releasing machines lock for "newest-cni-471541", held for 20.931408514s
	I0814 17:57:26.659854   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.660128   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:57:26.662819   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.663199   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.663220   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.663480   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.664005   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.664235   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.664382   86299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:57:26.664432   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.664453   86299 ssh_runner.go:195] Run: cat /version.json
	I0814 17:57:26.664475   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.667238   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.667497   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.667573   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.667602   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.667748   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.667942   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.667987   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.668016   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.668093   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.668196   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.668266   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:26.668510   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.668722   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.668892   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:26.743956   86299 ssh_runner.go:195] Run: systemctl --version
	I0814 17:57:26.782159   86299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:57:26.944504   86299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:57:26.950905   86299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:57:26.950960   86299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:57:26.966247   86299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:57:26.966272   86299 start.go:495] detecting cgroup driver to use...
	I0814 17:57:26.966337   86299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:57:26.980854   86299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:57:26.993918   86299 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:57:26.993977   86299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:57:27.007239   86299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:57:27.020726   86299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:57:27.145122   86299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:57:27.287567   86299 docker.go:233] disabling docker service ...
	I0814 17:57:27.287640   86299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:57:27.305385   86299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:57:27.322269   86299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:57:27.464605   86299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:57:27.586777   86299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:57:27.600954   86299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:57:27.618663   86299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:57:27.618722   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.628397   86299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:57:27.628486   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.638355   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.649135   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.659398   86299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:57:27.669485   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.679429   86299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.695959   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.705972   86299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:57:27.714686   86299 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:57:27.714750   86299 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:57:27.726487   86299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:57:27.735247   86299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:57:27.856059   86299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:57:27.992082   86299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:57:27.992163   86299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:57:27.997308   86299 start.go:563] Will wait 60s for crictl version
	I0814 17:57:27.997357   86299 ssh_runner.go:195] Run: which crictl
	I0814 17:57:28.000861   86299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:57:28.038826   86299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:57:28.038906   86299 ssh_runner.go:195] Run: crio --version
	I0814 17:57:28.067028   86299 ssh_runner.go:195] Run: crio --version
	I0814 17:57:28.095352   86299 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:57:28.096650   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:57:28.099436   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:28.099778   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:28.099799   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:28.099986   86299 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:57:28.104054   86299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:57:28.117409   86299 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0814 17:57:28.118596   86299 kubeadm.go:883] updating cluster {Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:57:28.118731   86299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:57:28.118804   86299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:57:28.150199   86299 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:57:28.150275   86299 ssh_runner.go:195] Run: which lz4
	I0814 17:57:28.154018   86299 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:57:28.157798   86299 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:57:28.157831   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:57:29.420372   86299 crio.go:462] duration metric: took 1.26637973s to copy over tarball
	I0814 17:57:29.420455   86299 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:57:31.480080   86299 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059594465s)
	I0814 17:57:31.480116   86299 crio.go:469] duration metric: took 2.059711522s to extract the tarball
	I0814 17:57:31.480161   86299 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:57:31.518708   86299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:57:31.564587   86299 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:57:31.564608   86299 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:57:31.564615   86299 kubeadm.go:934] updating node { 192.168.72.111 8443 v1.31.0 crio true true} ...
	I0814 17:57:31.564708   86299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-471541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:57:31.564770   86299 ssh_runner.go:195] Run: crio config
	I0814 17:57:31.611368   86299 cni.go:84] Creating CNI manager for ""
	I0814 17:57:31.611386   86299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:57:31.611397   86299 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0814 17:57:31.611417   86299 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-471541 NodeName:newest-cni-471541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:57:31.611566   86299 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-471541"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
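Note that the generated config above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 only accepts with the deprecation warnings visible further down in the init output. A minimal sketch of the migration those warnings suggest, run against the file minikube copies to /var/tmp/minikube/kubeadm.yaml later in this log (the output filename here is illustrative, not something minikube creates):

    # Hedged sketch: migrate the deprecated v1beta3 config to the current kubeadm API version.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm-migrated.yaml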
	I0814 17:57:31.611626   86299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:57:31.620975   86299 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:57:31.621029   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:57:31.630694   86299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0814 17:57:31.647731   86299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:57:31.663961   86299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0814 17:57:31.679061   86299 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0814 17:57:31.682514   86299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:57:31.693658   86299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:57:31.815232   86299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:57:31.832616   86299 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541 for IP: 192.168.72.111
	I0814 17:57:31.832641   86299 certs.go:194] generating shared ca certs ...
	I0814 17:57:31.832657   86299 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:31.832804   86299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:57:31.832846   86299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:57:31.832856   86299 certs.go:256] generating profile certs ...
	I0814 17:57:31.832925   86299 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.key
	I0814 17:57:31.832939   86299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.crt with IP's: []
	I0814 17:57:32.014258   86299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.crt ...
	I0814 17:57:32.014289   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.crt: {Name:mk52b84d834b78123e55ca64dba1a8b4d8b898aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.014459   86299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.key ...
	I0814 17:57:32.014469   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.key: {Name:mk9567e6fc3d29715ca9a09dafb97350c0bceb29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.014549   86299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b
	I0814 17:57:32.014563   86299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt.5e517d6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.111]
	I0814 17:57:32.164276   86299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt.5e517d6b ...
	I0814 17:57:32.164308   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt.5e517d6b: {Name:mkac8adafeddf6c4f1d680cb94be9d6c22597534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.164472   86299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b ...
	I0814 17:57:32.164485   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b: {Name:mke9139d583f3caeb8974b7b3c201343ee74e43e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.164554   86299 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt.5e517d6b -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt
	I0814 17:57:32.164664   86299 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key
	I0814 17:57:32.164719   86299 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key
	I0814 17:57:32.164734   86299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt with IP's: []
	I0814 17:57:32.231890   86299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt ...
	I0814 17:57:32.231921   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt: {Name:mk2b2f1abb23d3529705151f176cdde77bf7fdac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.232077   86299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key ...
	I0814 17:57:32.232092   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key: {Name:mk2d72284b2ac40d1f8ebb8a9d06c28bb6e57547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.232263   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:57:32.232298   86299 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:57:32.232308   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:57:32.232329   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:57:32.232350   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:57:32.232374   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:57:32.232410   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:57:32.232983   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:57:32.256499   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:57:32.277563   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:57:32.298928   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:57:32.321037   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:57:32.342399   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:57:32.363856   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:57:32.385891   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:57:32.408600   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:57:32.430317   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:57:32.453400   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
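At this point the profile certs have been generated and copied under /var/lib/minikube/certs. A hedged sketch for double-checking the SANs baked into the apiserver cert on the node (the output should include the IPs listed above: 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.111, plus the configured API server names):

    # Inspect the Subject Alternative Names of the generated apiserver certificate.
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
        | grep -A1 'Subject Alternative Name'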
	I0814 17:57:32.476713   86299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:57:32.492709   86299 ssh_runner.go:195] Run: openssl version
	I0814 17:57:32.498690   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:57:32.509183   86299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:57:32.513273   86299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:57:32.513316   86299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:57:32.519038   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:57:32.529124   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:57:32.539264   86299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:57:32.543407   86299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:57:32.543464   86299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:57:32.549026   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:57:32.559392   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:57:32.570264   86299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:57:32.575210   86299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:57:32.575267   86299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:57:32.581151   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
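The ln -fs commands above implement OpenSSL's hashed-directory lookup: `openssl x509 -hash -noout` prints the subject-name hash of a certificate, and a symlink named <hash>.0 in /etc/ssl/certs lets TLS clients locate that CA by hash. A sketch reproducing one of the links by hand, using the value this run printed for minikubeCA.pem:

    # Recreate the hash symlink for the minikube CA (expected hash in this run: b5213941).
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"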
	I0814 17:57:32.594134   86299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:57:32.600886   86299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 17:57:32.600933   86299 kubeadm.go:392] StartCluster: {Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:57:32.601000   86299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:57:32.601060   86299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:57:32.657188   86299 cri.go:89] found id: ""
	I0814 17:57:32.657255   86299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:57:32.667184   86299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:57:32.676720   86299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:57:32.685231   86299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:57:32.685249   86299 kubeadm.go:157] found existing configuration files:
	
	I0814 17:57:32.685290   86299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:57:32.694192   86299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:57:32.694272   86299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:57:32.703728   86299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:57:32.712273   86299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:57:32.712325   86299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:57:32.721370   86299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:57:32.730304   86299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:57:32.730397   86299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:57:32.739898   86299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:57:32.748154   86299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:57:32.748226   86299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:57:32.757221   86299 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:57:32.855261   86299 kubeadm.go:310] W0814 17:57:32.836323     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:57:32.856181   86299 kubeadm.go:310] W0814 17:57:32.837336     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:57:32.956543   86299 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:57:43.407080   86299 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:57:43.407168   86299 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:57:43.407266   86299 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:57:43.407453   86299 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:57:43.407593   86299 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:57:43.407693   86299 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:57:43.410365   86299 out.go:204]   - Generating certificates and keys ...
	I0814 17:57:43.410456   86299 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:57:43.410542   86299 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:57:43.410641   86299 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 17:57:43.410698   86299 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 17:57:43.410786   86299 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 17:57:43.410870   86299 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 17:57:43.410950   86299 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 17:57:43.411115   86299 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-471541] and IPs [192.168.72.111 127.0.0.1 ::1]
	I0814 17:57:43.411189   86299 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 17:57:43.411315   86299 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-471541] and IPs [192.168.72.111 127.0.0.1 ::1]
	I0814 17:57:43.411447   86299 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 17:57:43.411563   86299 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 17:57:43.411617   86299 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 17:57:43.411687   86299 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:57:43.411764   86299 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:57:43.411863   86299 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:57:43.411937   86299 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:57:43.411998   86299 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:57:43.412049   86299 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:57:43.412134   86299 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:57:43.412231   86299 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:57:43.413378   86299 out.go:204]   - Booting up control plane ...
	I0814 17:57:43.413484   86299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:57:43.413602   86299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:57:43.413683   86299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:57:43.413815   86299 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:57:43.414003   86299 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:57:43.414039   86299 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:57:43.414214   86299 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:57:43.414324   86299 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:57:43.414396   86299 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.4461ms
	I0814 17:57:43.414498   86299 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:57:43.414591   86299 kubeadm.go:310] [api-check] The API server is healthy after 6.001525382s
	I0814 17:57:43.414704   86299 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:57:43.414823   86299 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:57:43.414912   86299 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:57:43.415181   86299 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-471541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:57:43.415282   86299 kubeadm.go:310] [bootstrap-token] Using token: mnlq2m.zz0pj7oikraspg1j
	I0814 17:57:43.416874   86299 out.go:204]   - Configuring RBAC rules ...
	I0814 17:57:43.416992   86299 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:57:43.417088   86299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:57:43.417229   86299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:57:43.417385   86299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:57:43.417552   86299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:57:43.417680   86299 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:57:43.417780   86299 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:57:43.417818   86299 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:57:43.417857   86299 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:57:43.417862   86299 kubeadm.go:310] 
	I0814 17:57:43.417910   86299 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:57:43.417915   86299 kubeadm.go:310] 
	I0814 17:57:43.418009   86299 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:57:43.418027   86299 kubeadm.go:310] 
	I0814 17:57:43.418067   86299 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:57:43.418154   86299 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:57:43.418223   86299 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:57:43.418232   86299 kubeadm.go:310] 
	I0814 17:57:43.418313   86299 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:57:43.418326   86299 kubeadm.go:310] 
	I0814 17:57:43.418398   86299 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:57:43.418406   86299 kubeadm.go:310] 
	I0814 17:57:43.418477   86299 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:57:43.418576   86299 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:57:43.418674   86299 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:57:43.418682   86299 kubeadm.go:310] 
	I0814 17:57:43.418780   86299 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:57:43.418878   86299 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:57:43.418888   86299 kubeadm.go:310] 
	I0814 17:57:43.418972   86299 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mnlq2m.zz0pj7oikraspg1j \
	I0814 17:57:43.419057   86299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:57:43.419081   86299 kubeadm.go:310] 	--control-plane 
	I0814 17:57:43.419087   86299 kubeadm.go:310] 
	I0814 17:57:43.419175   86299 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:57:43.419184   86299 kubeadm.go:310] 
	I0814 17:57:43.419281   86299 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mnlq2m.zz0pj7oikraspg1j \
	I0814 17:57:43.419416   86299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
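Per the kubeadm documentation, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA they fetch from the cluster-info ConfigMap. A hedged sketch for recomputing it on the control plane (minikube keeps the CA under /var/lib/minikube/certs rather than /etc/kubernetes/pki):

    # The hex digest printed here should match the sha256:... value in the join command.
    openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256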
	I0814 17:57:43.419432   86299 cni.go:84] Creating CNI manager for ""
	I0814 17:57:43.419438   86299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:57:43.421053   86299 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:57:43.422382   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:57:43.434351   86299 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
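Only the size of the CNI config (496 bytes) shows up in the log; the file itself is not printed. For orientation, a generic bridge conflist for the 10.42.0.0/16 pod CIDR used in this run would look roughly like the sketch below. This is illustrative only, not the literal /etc/cni/net.d/1-k8s.conflist that minikube writes:

    {
      "cniVersion": "0.4.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge0",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }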
	I0814 17:57:43.455308   86299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:57:43.455408   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:43.455463   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-471541 minikube.k8s.io/updated_at=2024_08_14T17_57_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=newest-cni-471541 minikube.k8s.io/primary=true
	I0814 17:57:43.488133   86299 ops.go:34] apiserver oom_adj: -16
	I0814 17:57:43.683870   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:44.183959   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:44.684056   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:45.184879   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:45.684953   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:46.184588   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:46.684165   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:47.183993   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:47.684695   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:47.767751   86299 kubeadm.go:1113] duration metric: took 4.312422138s to wait for elevateKubeSystemPrivileges
	I0814 17:57:47.767777   86299 kubeadm.go:394] duration metric: took 15.166847499s to StartCluster
	I0814 17:57:47.767796   86299 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:47.767878   86299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:57:47.770091   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:47.770333   86299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 17:57:47.770361   86299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:57:47.770456   86299 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:57:47.770535   86299 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-471541"
	I0814 17:57:47.770540   86299 config.go:182] Loaded profile config "newest-cni-471541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:47.770563   86299 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-471541"
	I0814 17:57:47.770552   86299 addons.go:69] Setting default-storageclass=true in profile "newest-cni-471541"
	I0814 17:57:47.770611   86299 host.go:66] Checking if "newest-cni-471541" exists ...
	I0814 17:57:47.770712   86299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-471541"
	I0814 17:57:47.771044   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.771080   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.771165   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.771206   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.772982   86299 out.go:177] * Verifying Kubernetes components...
	I0814 17:57:47.774337   86299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:57:47.786878   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46607
	I0814 17:57:47.787267   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I0814 17:57:47.787406   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.787727   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.787963   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.787982   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.788213   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.788232   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.788310   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.788572   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.788734   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:57:47.788896   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.788940   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.792639   86299 addons.go:234] Setting addon default-storageclass=true in "newest-cni-471541"
	I0814 17:57:47.792673   86299 host.go:66] Checking if "newest-cni-471541" exists ...
	I0814 17:57:47.792979   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.793021   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.805578   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I0814 17:57:47.806106   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.806668   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.806696   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.807101   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.807301   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:57:47.809164   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:47.809613   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43303
	I0814 17:57:47.810207   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.810697   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.810723   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.811033   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.811220   86299 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:57:47.811589   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.811622   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.812922   86299 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:57:47.812940   86299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:57:47.812959   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:47.816598   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:47.817246   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:47.817284   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:47.817528   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:47.817723   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:47.817983   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:47.818133   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:47.828348   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45611
	I0814 17:57:47.828776   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.829259   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.829276   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.829624   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.829817   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:57:47.831531   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:47.831875   86299 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:57:47.831895   86299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:57:47.831914   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:47.834448   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:47.834823   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:47.834862   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:47.834971   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:47.835159   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:47.835285   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:47.835430   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:48.041626   86299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 17:57:48.070674   86299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:57:48.258913   86299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:57:48.335827   86299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:57:48.631875   86299 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
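The long kubectl pipeline a few lines above is how the host.minikube.internal record lands in CoreDNS: sed inserts a `log` directive in front of the existing `errors` line and a `hosts` block in front of the `forward . /etc/resolv.conf` line, then the edited ConfigMap is pushed back with `kubectl replace -f -`. Reconstructed from those sed expressions (a sketch, not a dump of the live ConfigMap), the relevant part of the resulting Corefile is:

    log
    errors
    ...
    hosts {
       192.168.72.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf ...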
	I0814 17:57:48.633914   86299 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:57:48.633975   86299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:57:48.838007   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:48.838035   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:48.838433   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:48.838462   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:48.838470   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:48.838479   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:48.838436   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:57:48.838714   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:57:48.838754   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:48.838763   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:48.862868   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:48.862893   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:48.863203   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:48.863226   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:48.863296   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:57:49.139527   86299 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-471541" context rescaled to 1 replicas
	I0814 17:57:49.410997   86299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.075113426s)
	I0814 17:57:49.411050   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:49.411063   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:49.411091   86299 api_server.go:72] duration metric: took 1.640695889s to wait for apiserver process to appear ...
	I0814 17:57:49.411118   86299 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:57:49.411140   86299 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:57:49.411379   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:49.411393   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:49.411401   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:49.411407   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:49.411627   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:49.411642   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:49.413818   86299 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0814 17:57:49.415351   86299 addons.go:510] duration metric: took 1.644910599s for enable addons: enabled=[default-storageclass storage-provisioner]
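A quick way to confirm the enabled addons for this profile from the host, assuming the minikube binary used for this run is available on PATH as `minikube` (not part of the log above):

    minikube -p newest-cni-471541 addons list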
	I0814 17:57:49.423444   86299 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0814 17:57:49.429372   86299 api_server.go:141] control plane version: v1.31.0
	I0814 17:57:49.429398   86299 api_server.go:131] duration metric: took 18.273168ms to wait for apiserver health ...
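The readiness probe above is a plain GET against the apiserver's /healthz endpoint. In a default configuration the system:public-info-viewer ClusterRole exposes /healthz, /livez, /readyz and /version to unauthenticated clients, so the same check can usually be made by hand with an insecure curl (a sketch; if anonymous auth is disabled the request returns 401/403 instead):

    curl -k https://192.168.72.111:8443/healthz          # expect: ok
    curl -k "https://192.168.72.111:8443/readyz?verbose" # per-check breakdown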
	I0814 17:57:49.429407   86299 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:57:49.445146   86299 system_pods.go:59] 8 kube-system pods found
	I0814 17:57:49.445193   86299 system_pods.go:61] "coredns-6f6b679f8f-7mjxm" [2e18a55f-6371-4dae-98ae-96f35bd3e715] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:57:49.445206   86299 system_pods.go:61] "coredns-6f6b679f8f-qwgrb" [19a7dcc5-a7ef-4c1a-8d2b-f9fe4dcac290] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:57:49.445217   86299 system_pods.go:61] "etcd-newest-cni-471541" [b2a40767-5297-4676-b579-146172237eb4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:57:49.445230   86299 system_pods.go:61] "kube-apiserver-newest-cni-471541" [72c91661-d5b6-4b97-b8e4-811b7a8f6651] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:57:49.445239   86299 system_pods.go:61] "kube-controller-manager-newest-cni-471541" [148d4870-d2c0-438e-9b5c-85640f20db45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:57:49.445250   86299 system_pods.go:61] "kube-proxy-smtcr" [63ede546-1b98-4f05-8500-8a35f2fe52ab] Running
	I0814 17:57:49.445259   86299 system_pods.go:61] "kube-scheduler-newest-cni-471541" [b3192192-0c5b-485c-acc7-b14d6b8e5baf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:57:49.445263   86299 system_pods.go:61] "storage-provisioner" [8b2208e6-577e-4f6d-90e3-2213b2bd5b7a] Pending
	I0814 17:57:49.445272   86299 system_pods.go:74] duration metric: took 15.858724ms to wait for pod list to return data ...
	I0814 17:57:49.445281   86299 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:57:49.456269   86299 default_sa.go:45] found service account: "default"
	I0814 17:57:49.456290   86299 default_sa.go:55] duration metric: took 11.003035ms for default service account to be created ...
	I0814 17:57:49.456301   86299 kubeadm.go:582] duration metric: took 1.685910971s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 17:57:49.456315   86299 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:57:49.461967   86299 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:57:49.461993   86299 node_conditions.go:123] node cpu capacity is 2
	I0814 17:57:49.462006   86299 node_conditions.go:105] duration metric: took 5.685538ms to run NodePressure ...
	I0814 17:57:49.462020   86299 start.go:241] waiting for startup goroutines ...
	I0814 17:57:49.462028   86299 start.go:246] waiting for cluster config update ...
	I0814 17:57:49.462041   86299 start.go:255] writing updated cluster config ...
	I0814 17:57:49.462353   86299 ssh_runner.go:195] Run: rm -f paused
	I0814 17:57:49.521476   86299 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:57:49.524006   86299 out.go:177] * Done! kubectl is now configured to use "newest-cni-471541" cluster and "default" namespace by default
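With the kubeconfig context now pointed at the new profile, as the final log line states, basic post-start checks are ordinary kubectl calls from the host (a sketch, assuming kubectl is on PATH):

    kubectl get nodes -o wide
    kubectl get pods -A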
	
	
	==> CRI-O <==
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.634097436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658275634064990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4765b786-d5d2-45c7-ae42-f77119261462 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.634975117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4d45da0-1d1e-40d1-b3a5-76e762d3ba9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.635072207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4d45da0-1d1e-40d1-b3a5-76e762d3ba9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.635348649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657052262629168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b98873700,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90b87828591b4c4edd21b3d179b225801cfadef171565630f1a4c8f99d09d,PodSandboxId:4b58f8b06e1f749b5e6a27770f77d7563e20563ad0cc471b67bf9a23a0f1a664,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723657032167672774,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 876cfcd4-be4c-422c-ad8f-ae89b22dd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03,PodSandboxId:ad3f0ae523e518364f6f622e4d020df4dfd1cea426663069205035ee58b36e59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657029063972969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kccp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db961449-4326-4700-a3e0-c11ab96df3ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052,PodSandboxId:44e239110b45273bc0be17f5aaf2671e4a5e326a971b2c9a8bb51af18f63fd8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657021522233967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8x9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ae0e0-8205-4854-8
2ba-0119b81efe2a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723657021434577976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b988737
00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c,PodSandboxId:052932072aaab2c6ff9bf917cf2a22c41d19c556251b965dbda2e082f75f2b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657016670697236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7d3f0a71a520824ed292b415206ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5,PodSandboxId:a7ac6ee82c686b17e2ce738219d93a766ecc163ca9b2f4544661248fe6dd90ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657016685526970,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e316ea113121d01cd33357150ae58e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535,PodSandboxId:1aeed98a248b5f70f1569fe266a3e9ce237d924d14b03dad43555518bf176277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657016697439814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c60fab48b6bac6cf28be63793c0d8b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0,PodSandboxId:b00f8d6289491d6c22fdd416eacc08a9c61849e5a8f4cb98842428721eb3ee84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657016687583333,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b45f6e13fda13d3dc38c3cda0c2b93c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4d45da0-1d1e-40d1-b3a5-76e762d3ba9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.678852233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13c0e3a5-38cc-4d54-84fc-1b1870b97805 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.678932553Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13c0e3a5-38cc-4d54-84fc-1b1870b97805 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.680574620Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2417c54a-78c4-4da2-bef6-a8181a8fa97c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.680985193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658275680958046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2417c54a-78c4-4da2-bef6-a8181a8fa97c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.681759864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02713d7d-58d1-4ab3-b7c2-09a4e0c1c43c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.681815202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02713d7d-58d1-4ab3-b7c2-09a4e0c1c43c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.682014622Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657052262629168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b98873700,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90b87828591b4c4edd21b3d179b225801cfadef171565630f1a4c8f99d09d,PodSandboxId:4b58f8b06e1f749b5e6a27770f77d7563e20563ad0cc471b67bf9a23a0f1a664,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723657032167672774,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 876cfcd4-be4c-422c-ad8f-ae89b22dd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03,PodSandboxId:ad3f0ae523e518364f6f622e4d020df4dfd1cea426663069205035ee58b36e59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657029063972969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kccp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db961449-4326-4700-a3e0-c11ab96df3ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052,PodSandboxId:44e239110b45273bc0be17f5aaf2671e4a5e326a971b2c9a8bb51af18f63fd8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657021522233967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8x9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ae0e0-8205-4854-8
2ba-0119b81efe2a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723657021434577976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b988737
00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c,PodSandboxId:052932072aaab2c6ff9bf917cf2a22c41d19c556251b965dbda2e082f75f2b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657016670697236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7d3f0a71a520824ed292b415206ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5,PodSandboxId:a7ac6ee82c686b17e2ce738219d93a766ecc163ca9b2f4544661248fe6dd90ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657016685526970,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e316ea113121d01cd33357150ae58e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535,PodSandboxId:1aeed98a248b5f70f1569fe266a3e9ce237d924d14b03dad43555518bf176277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657016697439814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c60fab48b6bac6cf28be63793c0d8b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0,PodSandboxId:b00f8d6289491d6c22fdd416eacc08a9c61849e5a8f4cb98842428721eb3ee84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657016687583333,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b45f6e13fda13d3dc38c3cda0c2b93c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02713d7d-58d1-4ab3-b7c2-09a4e0c1c43c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.723663441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94cd925a-c0b8-4a16-a7b6-87db08f7b6bc name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.723797713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94cd925a-c0b8-4a16-a7b6-87db08f7b6bc name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.724980059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e2bf33d-e66f-49ab-bfa0-e8f215dbe8ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.725478657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658275725446818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e2bf33d-e66f-49ab-bfa0-e8f215dbe8ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.726080826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d05d64f-c766-46dc-a7e0-5b6453111dc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.726133620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d05d64f-c766-46dc-a7e0-5b6453111dc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.726318837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657052262629168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b98873700,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90b87828591b4c4edd21b3d179b225801cfadef171565630f1a4c8f99d09d,PodSandboxId:4b58f8b06e1f749b5e6a27770f77d7563e20563ad0cc471b67bf9a23a0f1a664,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723657032167672774,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 876cfcd4-be4c-422c-ad8f-ae89b22dd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03,PodSandboxId:ad3f0ae523e518364f6f622e4d020df4dfd1cea426663069205035ee58b36e59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657029063972969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kccp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db961449-4326-4700-a3e0-c11ab96df3ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052,PodSandboxId:44e239110b45273bc0be17f5aaf2671e4a5e326a971b2c9a8bb51af18f63fd8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657021522233967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8x9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ae0e0-8205-4854-8
2ba-0119b81efe2a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723657021434577976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b988737
00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c,PodSandboxId:052932072aaab2c6ff9bf917cf2a22c41d19c556251b965dbda2e082f75f2b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657016670697236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7d3f0a71a520824ed292b415206ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5,PodSandboxId:a7ac6ee82c686b17e2ce738219d93a766ecc163ca9b2f4544661248fe6dd90ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657016685526970,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e316ea113121d01cd33357150ae58e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535,PodSandboxId:1aeed98a248b5f70f1569fe266a3e9ce237d924d14b03dad43555518bf176277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657016697439814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c60fab48b6bac6cf28be63793c0d8b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0,PodSandboxId:b00f8d6289491d6c22fdd416eacc08a9c61849e5a8f4cb98842428721eb3ee84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657016687583333,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b45f6e13fda13d3dc38c3cda0c2b93c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d05d64f-c766-46dc-a7e0-5b6453111dc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.766190607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8be381a-d1a5-4c4c-8e4d-a8e24fca118a name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.766265285Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8be381a-d1a5-4c4c-8e4d-a8e24fca118a name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.767460546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43110bcf-11a5-445d-b25f-640ecba5eecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.767861256Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658275767837256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43110bcf-11a5-445d-b25f-640ecba5eecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.768503929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ad32964-2de1-4c05-b6a8-a988d0d9bc26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.768556441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ad32964-2de1-4c05-b6a8-a988d0d9bc26 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:55 embed-certs-309673 crio[729]: time="2024-08-14 17:57:55.768798504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657052262629168,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b98873700,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c90b87828591b4c4edd21b3d179b225801cfadef171565630f1a4c8f99d09d,PodSandboxId:4b58f8b06e1f749b5e6a27770f77d7563e20563ad0cc471b67bf9a23a0f1a664,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723657032167672774,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 876cfcd4-be4c-422c-ad8f-ae89b22dd9b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03,PodSandboxId:ad3f0ae523e518364f6f622e4d020df4dfd1cea426663069205035ee58b36e59,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657029063972969,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kccp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db961449-4326-4700-a3e0-c11ab96df3ae,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052,PodSandboxId:44e239110b45273bc0be17f5aaf2671e4a5e326a971b2c9a8bb51af18f63fd8b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657021522233967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-z8x9t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84ae0e0-8205-4854-8
2ba-0119b81efe2a,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94,PodSandboxId:27c056bb63e0e37fb3f45b889b1fa410083fc6253c7b54b55b759d873d2dad93,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723657021434577976,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c7d9343-7223-4e8a-9a23-151b988737
00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c,PodSandboxId:052932072aaab2c6ff9bf917cf2a22c41d19c556251b965dbda2e082f75f2b79,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657016670697236,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f7d3f0a71a520824ed292b415206ab,},Annotations:map[string]string{io.kube
rnetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5,PodSandboxId:a7ac6ee82c686b17e2ce738219d93a766ecc163ca9b2f4544661248fe6dd90ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657016685526970,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e316ea113121d01cd33357150ae58e,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535,PodSandboxId:1aeed98a248b5f70f1569fe266a3e9ce237d924d14b03dad43555518bf176277,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657016697439814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c60fab48b6bac6cf28be63793c0d8b,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0,PodSandboxId:b00f8d6289491d6c22fdd416eacc08a9c61849e5a8f4cb98842428721eb3ee84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657016687583333,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-309673,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b45f6e13fda13d3dc38c3cda0c2b93c,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ad32964-2de1-4c05-b6a8-a988d0d9bc26 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1c13e2694057       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   27c056bb63e0e       storage-provisioner
	01c90b8782859       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   4b58f8b06e1f7       busybox
	0ac264c97809e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      20 minutes ago      Running             coredns                   1                   ad3f0ae523e51       coredns-6f6b679f8f-kccp8
	4b094a20accac       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      20 minutes ago      Running             kube-proxy                1                   44e239110b452       kube-proxy-z8x9t
	bdac981ff1f5d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   27c056bb63e0e       storage-provisioner
	038cd12336322       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      20 minutes ago      Running             kube-controller-manager   1                   1aeed98a248b5       kube-controller-manager-embed-certs-309673
	221f94a9fa6af       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      20 minutes ago      Running             kube-apiserver            1                   b00f8d6289491       kube-apiserver-embed-certs-309673
	e2594588a11a2       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      20 minutes ago      Running             kube-scheduler            1                   a7ac6ee82c686       kube-scheduler-embed-certs-309673
	4b3a19329bb34       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      20 minutes ago      Running             etcd                      1                   052932072aaab       etcd-embed-certs-309673
	
	
	==> coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40830 - 21315 "HINFO IN 5442161632545793277.7934525811174230808. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017461639s
	
	
	==> describe nodes <==
	Name:               embed-certs-309673
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-309673
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=embed-certs-309673
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T17_29_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:29:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-309673
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:57:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:57:54 +0000   Wed, 14 Aug 2024 17:29:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:57:54 +0000   Wed, 14 Aug 2024 17:29:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:57:54 +0000   Wed, 14 Aug 2024 17:29:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:57:54 +0000   Wed, 14 Aug 2024 17:37:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.2
	  Hostname:    embed-certs-309673
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6300c9e9736b454195de57a9af7b141a
	  System UUID:                6300c9e9-736b-4541-95de-57a9af7b141a
	  Boot ID:                    bc806884-d868-4a06-95a7-574ce4bb3d49
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-kccp8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-309673                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-309673             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-309673    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-z8x9t                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-309673             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-jflvw               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-309673 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-309673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-309673 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node embed-certs-309673 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-309673 event: Registered Node embed-certs-309673 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-309673 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-309673 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-309673 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node embed-certs-309673 event: Registered Node embed-certs-309673 in Controller
	
	
	==> dmesg <==
	[Aug14 17:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050667] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037725] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.708486] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.832735] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.337162] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.871977] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.064439] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049353] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.194362] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.131576] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.292316] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.036513] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +1.657314] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.062766] kauditd_printk_skb: 158 callbacks suppressed
	[Aug14 17:37] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.390624] systemd-fstab-generator[1547]: Ignoring "noauto" option for root device
	[  +3.328081] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.145848] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] <==
	{"level":"info","ts":"2024-08-14T17:36:59.066691Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:36:59.068029Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:36:59.068043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T17:36:59.069138Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.2:2379"}
	{"level":"warn","ts":"2024-08-14T17:37:17.176081Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.80244ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1264826851580686260 > lease_revoke:<id:118d9151f6d7ae25>","response":"size:28"}
	{"level":"info","ts":"2024-08-14T17:37:17.176178Z","caller":"traceutil/trace.go:171","msg":"trace[1594452547] linearizableReadLoop","detail":"{readStateIndex:617; appliedIndex:616; }","duration":"367.742359ms","start":"2024-08-14T17:37:16.808424Z","end":"2024-08-14T17:37:17.176166Z","steps":["trace[1594452547] 'read index received'  (duration: 142.716559ms)","trace[1594452547] 'applied index is now lower than readState.Index'  (duration: 225.0249ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:37:17.176295Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"367.848597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-309673\" ","response":"range_response_count:1 size:5478"}
	{"level":"info","ts":"2024-08-14T17:37:17.176310Z","caller":"traceutil/trace.go:171","msg":"trace[1339079020] range","detail":"{range_begin:/registry/minions/embed-certs-309673; range_end:; response_count:1; response_revision:581; }","duration":"367.884768ms","start":"2024-08-14T17:37:16.808420Z","end":"2024-08-14T17:37:17.176305Z","steps":["trace[1339079020] 'agreement among raft nodes before linearized reading'  (duration: 367.778713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:37:17.176330Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T17:37:16.808354Z","time spent":"367.971599ms","remote":"127.0.0.1:48778","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":5501,"request content":"key:\"/registry/minions/embed-certs-309673\" "}
	{"level":"warn","ts":"2024-08-14T17:37:37.268589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.78448ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1264826851580686436 > lease_revoke:<id:118d9151f6d7afe0>","response":"size:28"}
	{"level":"info","ts":"2024-08-14T17:37:37.268692Z","caller":"traceutil/trace.go:171","msg":"trace[1828047830] linearizableReadLoop","detail":"{readStateIndex:640; appliedIndex:639; }","duration":"336.087182ms","start":"2024-08-14T17:37:36.932593Z","end":"2024-08-14T17:37:37.268680Z","steps":["trace[1828047830] 'read index received'  (duration: 108.116144ms)","trace[1828047830] 'applied index is now lower than readState.Index'  (duration: 227.969364ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:37:37.268854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"336.249289ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-jflvw\" ","response":"range_response_count:1 size:4382"}
	{"level":"info","ts":"2024-08-14T17:37:37.268874Z","caller":"traceutil/trace.go:171","msg":"trace[359000852] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-6867b74b74-jflvw; range_end:; response_count:1; response_revision:600; }","duration":"336.278289ms","start":"2024-08-14T17:37:36.932589Z","end":"2024-08-14T17:37:37.268867Z","steps":["trace[359000852] 'agreement among raft nodes before linearized reading'  (duration: 336.171215ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:37:37.268899Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T17:37:36.932557Z","time spent":"336.336755ms","remote":"127.0.0.1:48788","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4405,"request content":"key:\"/registry/pods/kube-system/metrics-server-6867b74b74-jflvw\" "}
	{"level":"info","ts":"2024-08-14T17:46:59.107108Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":825}
	{"level":"info","ts":"2024-08-14T17:46:59.116969Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":825,"took":"9.322581ms","hash":638957722,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2637824,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-08-14T17:46:59.117097Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":638957722,"revision":825,"compact-revision":-1}
	{"level":"info","ts":"2024-08-14T17:51:59.113414Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1067}
	{"level":"info","ts":"2024-08-14T17:51:59.117196Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1067,"took":"3.33609ms","hash":1820767333,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-14T17:51:59.117284Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1820767333,"revision":1067,"compact-revision":825}
	{"level":"info","ts":"2024-08-14T17:56:59.122956Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1311}
	{"level":"info","ts":"2024-08-14T17:56:59.127068Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1311,"took":"3.815113ms","hash":4135578874,"current-db-size-bytes":2637824,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-14T17:56:59.127122Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4135578874,"revision":1311,"compact-revision":1067}
	{"level":"warn","ts":"2024-08-14T17:57:33.147356Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.854927ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:57:33.147461Z","caller":"traceutil/trace.go:171","msg":"trace[334357529] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1582; }","duration":"106.01435ms","start":"2024-08-14T17:57:33.041425Z","end":"2024-08-14T17:57:33.147439Z","steps":["trace[334357529] 'range keys from in-memory index tree'  (duration: 105.836624ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:57:56 up 21 min,  0 users,  load average: 0.00, 0.04, 0.06
	Linux embed-certs-309673 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] <==
	I0814 17:53:01.447877       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:53:01.447934       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:55:01.448273       1 handler_proxy.go:99] no RequestInfo found in the context
	W0814 17:55:01.448522       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:55:01.448729       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0814 17:55:01.448722       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 17:55:01.449916       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:55:01.449958       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:57:00.447980       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:00.448116       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 17:57:01.450440       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:01.450517       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 17:57:01.450616       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:01.450713       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:57:01.451662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:57:01.451790       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] <==
	I0814 17:52:34.593157       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:52:49.306660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-309673"
	E0814 17:53:04.136494       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:53:04.601312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:53:18.047619       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="255.606µs"
	I0814 17:53:29.051250       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="173.454µs"
	E0814 17:53:34.142815       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:53:34.609250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:54:04.149185       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:54:04.617153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:54:34.154893       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:54:34.628911       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:55:04.162123       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:55:04.636057       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:55:34.167868       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:55:34.642683       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:56:04.175267       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:56:04.650418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:56:34.181343       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:56:34.658943       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:57:04.190150       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:57:04.667992       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:57:34.195881       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:57:34.675749       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:57:54.550809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-309673"
	
	
	==> kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:37:01.713984       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:37:01.727272       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.2"]
	E0814 17:37:01.727345       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:37:01.758802       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:37:01.758843       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:37:01.758873       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:37:01.761118       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:37:01.761400       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:37:01.761453       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:37:01.763287       1 config.go:197] "Starting service config controller"
	I0814 17:37:01.763322       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:37:01.763350       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:37:01.763450       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:37:01.764318       1 config.go:326] "Starting node config controller"
	I0814 17:37:01.764338       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:37:01.863610       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 17:37:01.863635       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:37:01.865062       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] <==
	I0814 17:36:57.815171       1 serving.go:386] Generated self-signed cert in-memory
	W0814 17:37:00.349245       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0814 17:37:00.350435       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0814 17:37:00.350504       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0814 17:37:00.350530       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0814 17:37:00.429795       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0814 17:37:00.431416       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:37:00.442433       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0814 17:37:00.444510       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0814 17:37:00.445628       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0814 17:37:00.444531       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0814 17:37:00.546804       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 17:56:55 embed-certs-309673 kubelet[936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:56:55 embed-certs-309673 kubelet[936]: E0814 17:56:55.319159     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658215318876509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:56:55 embed-certs-309673 kubelet[936]: E0814 17:56:55.319196     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658215318876509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:00 embed-certs-309673 kubelet[936]: E0814 17:57:00.030445     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:57:05 embed-certs-309673 kubelet[936]: E0814 17:57:05.320633     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658225320307708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:05 embed-certs-309673 kubelet[936]: E0814 17:57:05.320705     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658225320307708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:15 embed-certs-309673 kubelet[936]: E0814 17:57:15.032070     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:57:15 embed-certs-309673 kubelet[936]: E0814 17:57:15.322687     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658235322275194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:15 embed-certs-309673 kubelet[936]: E0814 17:57:15.322736     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658235322275194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:25 embed-certs-309673 kubelet[936]: E0814 17:57:25.324569     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658245324228865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:25 embed-certs-309673 kubelet[936]: E0814 17:57:25.324610     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658245324228865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:27 embed-certs-309673 kubelet[936]: E0814 17:57:27.030808     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:57:35 embed-certs-309673 kubelet[936]: E0814 17:57:35.326569     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658255326082753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:35 embed-certs-309673 kubelet[936]: E0814 17:57:35.326618     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658255326082753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:39 embed-certs-309673 kubelet[936]: E0814 17:57:39.031227     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:57:45 embed-certs-309673 kubelet[936]: E0814 17:57:45.328785     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658265328498276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:45 embed-certs-309673 kubelet[936]: E0814 17:57:45.329041     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658265328498276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:54 embed-certs-309673 kubelet[936]: E0814 17:57:54.031543     936 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jflvw" podUID="69a57151-6948-46ea-bacf-0915ea90fe44"
	Aug 14 17:57:55 embed-certs-309673 kubelet[936]: E0814 17:57:55.058593     936 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:57:55 embed-certs-309673 kubelet[936]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:57:55 embed-certs-309673 kubelet[936]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:57:55 embed-certs-309673 kubelet[936]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:57:55 embed-certs-309673 kubelet[936]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:57:55 embed-certs-309673 kubelet[936]: E0814 17:57:55.331677     936 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658275331236247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:55 embed-certs-309673 kubelet[936]: E0814 17:57:55.331760     936 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658275331236247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] <==
	I0814 17:37:32.380741       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 17:37:32.396157       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 17:37:32.396651       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 17:37:49.795649       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 17:37:49.795842       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-309673_4af1e128-7cf2-4ab5-972d-f997e49c2728!
	I0814 17:37:49.800762       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"02efbc80-f5f3-44a2-acf2-74495f212cba", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-309673_4af1e128-7cf2-4ab5-972d-f997e49c2728 became leader
	I0814 17:37:49.896979       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-309673_4af1e128-7cf2-4ab5-972d-f997e49c2728!
	
	
	==> storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] <==
	I0814 17:37:01.581608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0814 17:37:31.585836       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-309673 -n embed-certs-309673
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-309673 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-jflvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-309673 describe pod metrics-server-6867b74b74-jflvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-309673 describe pod metrics-server-6867b74b74-jflvw: exit status 1 (61.065993ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-jflvw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-309673 describe pod metrics-server-6867b74b74-jflvw: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (444.43s)
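Note that the describe command above returned NotFound most likely because it was run without a namespace flag; the kubelet lines earlier in this dump place the pod in kube-system. A minimal sketch for rechecking it by hand (assuming the embed-certs-309673 profile still exists at that point; it is deleted later in the Audit log):

	kubectl --context embed-certs-309673 -n kube-system describe pod metrics-server-6867b74b74-jflvw
	kubectl --context embed-certs-309673 get pods -A --field-selector=status.phase!=Running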

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (442.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-14 17:58:48.270371925 +0000 UTC m=+6561.955654726
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-885666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-885666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.698µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-885666 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
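The image expectation can also be checked by hand; a minimal sketch, assuming the default-k8s-diff-port-885666 profile is still running and the dashboard objects were created:

	kubectl --context default-k8s-diff-port-885666 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	kubectl --context default-k8s-diff-port-885666 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard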
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-885666 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-885666 logs -n 25: (1.119461403s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	| start   | -p newest-cni-471541 --memory=2200 --alsologtostderr   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-471541             | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-471541                                   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	| delete  | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	| addons  | enable dashboard -p newest-cni-471541                  | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:58 UTC | 14 Aug 24 17:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-471541 --memory=2200 --alsologtostderr   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:58 UTC | 14 Aug 24 17:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-471541 image list                           | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:58 UTC | 14 Aug 24 17:58 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-471541                                   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:58 UTC | 14 Aug 24 17:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-471541                                   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:58 UTC | 14 Aug 24 17:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-471541                                   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:58 UTC | 14 Aug 24 17:58 UTC |
	| delete  | -p newest-cni-471541                                   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:58 UTC | 14 Aug 24 17:58 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:58:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:58:01.104202   87217 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:58:01.104467   87217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:58:01.104478   87217 out.go:304] Setting ErrFile to fd 2...
	I0814 17:58:01.104485   87217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:58:01.104659   87217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:58:01.105188   87217 out.go:298] Setting JSON to false
	I0814 17:58:01.106165   87217 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9625,"bootTime":1723648656,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:58:01.106233   87217 start.go:139] virtualization: kvm guest
	I0814 17:58:01.108489   87217 out.go:177] * [newest-cni-471541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:58:01.110012   87217 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:58:01.110038   87217 notify.go:220] Checking for updates...
	I0814 17:58:01.113071   87217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:58:01.114463   87217 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:58:01.116186   87217 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:58:01.117599   87217 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:58:01.118945   87217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:58:01.120719   87217 config.go:182] Loaded profile config "newest-cni-471541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:58:01.121137   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:01.121209   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:01.137261   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0814 17:58:01.137693   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:01.138406   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:01.138431   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:01.138795   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:01.139024   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:01.139305   87217 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:58:01.139651   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:01.139693   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:01.156451   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35067
	I0814 17:58:01.156976   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:01.157454   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:01.157479   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:01.157772   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:01.157949   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:01.193943   87217 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:58:01.195014   87217 start.go:297] selected driver: kvm2
	I0814 17:58:01.195027   87217 start.go:901] validating driver "kvm2" against &{Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:58:01.195159   87217 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:58:01.195977   87217 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:58:01.196065   87217 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:58:01.211569   87217 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:58:01.211989   87217 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 17:58:01.212029   87217 cni.go:84] Creating CNI manager for ""
	I0814 17:58:01.212040   87217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:58:01.212082   87217 start.go:340] cluster config:
	{Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:58:01.212204   87217 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:58:01.214059   87217 out.go:177] * Starting "newest-cni-471541" primary control-plane node in "newest-cni-471541" cluster
	I0814 17:58:01.215501   87217 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:58:01.215561   87217 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:58:01.215576   87217 cache.go:56] Caching tarball of preloaded images
	I0814 17:58:01.215684   87217 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:58:01.215700   87217 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 17:58:01.215828   87217 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/config.json ...
	I0814 17:58:01.216124   87217 start.go:360] acquireMachinesLock for newest-cni-471541: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:58:01.216191   87217 start.go:364] duration metric: took 37.924µs to acquireMachinesLock for "newest-cni-471541"
	I0814 17:58:01.216210   87217 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:58:01.216225   87217 fix.go:54] fixHost starting: 
	I0814 17:58:01.216527   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:01.216568   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:01.231678   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35069
	I0814 17:58:01.232066   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:01.232527   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:01.232545   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:01.232830   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:01.233053   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:01.233346   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:58:01.235040   87217 fix.go:112] recreateIfNeeded on newest-cni-471541: state=Stopped err=<nil>
	I0814 17:58:01.235063   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	W0814 17:58:01.235243   87217 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:58:01.237263   87217 out.go:177] * Restarting existing kvm2 VM for "newest-cni-471541" ...
	I0814 17:58:01.239091   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Start
	I0814 17:58:01.239270   87217 main.go:141] libmachine: (newest-cni-471541) Ensuring networks are active...
	I0814 17:58:01.240048   87217 main.go:141] libmachine: (newest-cni-471541) Ensuring network default is active
	I0814 17:58:01.240327   87217 main.go:141] libmachine: (newest-cni-471541) Ensuring network mk-newest-cni-471541 is active
	I0814 17:58:01.240633   87217 main.go:141] libmachine: (newest-cni-471541) Getting domain xml...
	I0814 17:58:01.241188   87217 main.go:141] libmachine: (newest-cni-471541) Creating domain...
	I0814 17:58:02.461229   87217 main.go:141] libmachine: (newest-cni-471541) Waiting to get IP...
	I0814 17:58:02.462098   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:02.462570   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:02.462639   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:02.462541   87252 retry.go:31] will retry after 297.092868ms: waiting for machine to come up
	I0814 17:58:02.760950   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:02.761437   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:02.761461   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:02.761395   87252 retry.go:31] will retry after 384.679844ms: waiting for machine to come up
	I0814 17:58:03.147809   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:03.148286   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:03.148312   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:03.148251   87252 retry.go:31] will retry after 293.642161ms: waiting for machine to come up
	I0814 17:58:03.443647   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:03.444192   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:03.444219   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:03.444144   87252 retry.go:31] will retry after 513.722834ms: waiting for machine to come up
	I0814 17:58:03.959948   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:03.960387   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:03.960427   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:03.960345   87252 retry.go:31] will retry after 757.957121ms: waiting for machine to come up
	I0814 17:58:04.720795   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:04.721105   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:04.721129   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:04.721082   87252 retry.go:31] will retry after 816.239705ms: waiting for machine to come up
	I0814 17:58:05.538962   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:05.539536   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:05.539559   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:05.539470   87252 retry.go:31] will retry after 838.157398ms: waiting for machine to come up
	I0814 17:58:06.379220   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:06.379653   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:06.379679   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:06.379616   87252 retry.go:31] will retry after 1.121196677s: waiting for machine to come up
	I0814 17:58:07.501954   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:07.502397   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:07.502418   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:07.502337   87252 retry.go:31] will retry after 1.765628495s: waiting for machine to come up
	I0814 17:58:09.269156   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:09.269576   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:09.269604   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:09.269545   87252 retry.go:31] will retry after 2.014698741s: waiting for machine to come up
	I0814 17:58:11.286035   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:11.286551   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:11.286584   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:11.286508   87252 retry.go:31] will retry after 1.944389558s: waiting for machine to come up
	I0814 17:58:13.233582   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:13.233981   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:13.234010   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:13.233931   87252 retry.go:31] will retry after 3.071180961s: waiting for machine to come up
	I0814 17:58:16.306770   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:16.307173   87217 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:58:16.307198   87217 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:58:16.307140   87252 retry.go:31] will retry after 3.172224151s: waiting for machine to come up
	I0814 17:58:19.482990   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.483427   87217 main.go:141] libmachine: (newest-cni-471541) Found IP for machine: 192.168.72.111
	I0814 17:58:19.483443   87217 main.go:141] libmachine: (newest-cni-471541) Reserving static IP address...
	I0814 17:58:19.483456   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has current primary IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.483847   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "newest-cni-471541", mac: "52:54:00:ae:15:ce", ip: "192.168.72.111"} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:19.483881   87217 main.go:141] libmachine: (newest-cni-471541) DBG | skip adding static IP to network mk-newest-cni-471541 - found existing host DHCP lease matching {name: "newest-cni-471541", mac: "52:54:00:ae:15:ce", ip: "192.168.72.111"}
	I0814 17:58:19.483897   87217 main.go:141] libmachine: (newest-cni-471541) Reserved static IP address: 192.168.72.111
	I0814 17:58:19.483911   87217 main.go:141] libmachine: (newest-cni-471541) Waiting for SSH to be available...
	I0814 17:58:19.483926   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Getting to WaitForSSH function...
	I0814 17:58:19.485806   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.486097   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:19.486119   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.486178   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Using SSH client type: external
	I0814 17:58:19.486203   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa (-rw-------)
	I0814 17:58:19.486240   87217 main.go:141] libmachine: (newest-cni-471541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:58:19.486262   87217 main.go:141] libmachine: (newest-cni-471541) DBG | About to run SSH command:
	I0814 17:58:19.486283   87217 main.go:141] libmachine: (newest-cni-471541) DBG | exit 0
	I0814 17:58:19.611034   87217 main.go:141] libmachine: (newest-cni-471541) DBG | SSH cmd err, output: <nil>: 
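The WaitForSSH loop above probes the guest by running a no-op command ("exit 0") through the system ssh client with the exact option list shown in the log. A minimal sketch of the same probe in Go, assuming only the standard library os/exec and the key path/host from the log above; the retry cadence is an assumption, not minikube's actual WaitForSSH implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // probeSSH runs "exit 0" on the target host over ssh and reports whether it succeeded.
    func probeSSH(user, host, keyPath string) error {
    	cmd := exec.Command("ssh",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, host),
    		"exit 0")
    	return cmd.Run()
    }

    func main() {
    	key := "/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa"
    	// Retry a few times while the VM boots; interval and count are illustrative.
    	for i := 0; i < 10; i++ {
    		if err := probeSSH("docker", "192.168.72.111", key); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("SSH did not become available")
    }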
	I0814 17:58:19.611348   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetConfigRaw
	I0814 17:58:19.611988   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:58:19.614580   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.614944   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:19.614969   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.615232   87217 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/config.json ...
	I0814 17:58:19.615499   87217 machine.go:94] provisionDockerMachine start ...
	I0814 17:58:19.615521   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:19.615713   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:19.618026   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.618355   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:19.618394   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.618523   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:19.618691   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:19.618854   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:19.619029   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:19.619190   87217 main.go:141] libmachine: Using SSH client type: native
	I0814 17:58:19.619417   87217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:58:19.619429   87217 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:58:19.727462   87217 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:58:19.727486   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:58:19.727736   87217 buildroot.go:166] provisioning hostname "newest-cni-471541"
	I0814 17:58:19.727772   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:58:19.727991   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:19.730634   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.731016   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:19.731046   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.731164   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:19.731365   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:19.731530   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:19.731728   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:19.731941   87217 main.go:141] libmachine: Using SSH client type: native
	I0814 17:58:19.732171   87217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:58:19.732189   87217 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-471541 && echo "newest-cni-471541" | sudo tee /etc/hostname
	I0814 17:58:19.852999   87217 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-471541
	
	I0814 17:58:19.853022   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:19.855570   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.855918   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:19.855945   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.856122   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:19.856335   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:19.856496   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:19.856671   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:19.856841   87217 main.go:141] libmachine: Using SSH client type: native
	I0814 17:58:19.856998   87217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:58:19.857013   87217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-471541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-471541/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-471541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:58:19.971935   87217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:58:19.971971   87217 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:58:19.972014   87217 buildroot.go:174] setting up certificates
	I0814 17:58:19.972031   87217 provision.go:84] configureAuth start
	I0814 17:58:19.972051   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:58:19.972324   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:58:19.975124   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.975565   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:19.975586   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.975723   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:19.978005   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.978365   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:19.978391   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:19.978509   87217 provision.go:143] copyHostCerts
	I0814 17:58:19.978576   87217 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:58:19.978586   87217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:58:19.978647   87217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:58:19.978742   87217 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:58:19.978750   87217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:58:19.978777   87217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:58:19.978876   87217 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:58:19.978886   87217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:58:19.978908   87217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:58:19.978964   87217 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.newest-cni-471541 san=[127.0.0.1 192.168.72.111 localhost minikube newest-cni-471541]
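The configureAuth step generates a server certificate whose subject alternative names are the entries listed above (127.0.0.1, 192.168.72.111, localhost, minikube, newest-cni-471541). A self-signed sketch of producing such a certificate with Go's crypto/x509; file names are hypothetical, and the real flow signs with the ca.pem/ca-key.pem referenced in the log rather than self-signing:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key pair for the server certificate.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-471541"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-471541"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.111")},
    	}

    	// Self-signed here for brevity; minikube signs with its CA key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}

    	certOut, err := os.Create("server.pem")
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	certOut.Close()

    	keyOut, err := os.Create("server-key.pem")
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	keyOut.Close()
    }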
	I0814 17:58:20.269665   87217 provision.go:177] copyRemoteCerts
	I0814 17:58:20.269720   87217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:58:20.269743   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:20.272336   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.272647   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:20.272679   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.272801   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:20.272975   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:20.273106   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:20.273232   87217 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:58:20.353103   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:58:20.375207   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:58:20.396947   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:58:20.418588   87217 provision.go:87] duration metric: took 446.539867ms to configureAuth
	I0814 17:58:20.418615   87217 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:58:20.418773   87217 config.go:182] Loaded profile config "newest-cni-471541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:58:20.418834   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:20.421495   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.421912   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:20.421943   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.422041   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:20.422216   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:20.422359   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:20.422535   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:20.422724   87217 main.go:141] libmachine: Using SSH client type: native
	I0814 17:58:20.422880   87217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:58:20.422896   87217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:58:20.682543   87217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:58:20.682572   87217 machine.go:97] duration metric: took 1.067056781s to provisionDockerMachine
	I0814 17:58:20.682584   87217 start.go:293] postStartSetup for "newest-cni-471541" (driver="kvm2")
	I0814 17:58:20.682607   87217 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:58:20.682634   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:20.682941   87217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:58:20.682982   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:20.685781   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.686073   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:20.686104   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.686274   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:20.686468   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:20.686628   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:20.686768   87217 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:58:20.771093   87217 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:58:20.775111   87217 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:58:20.775134   87217 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:58:20.775197   87217 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:58:20.775274   87217 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:58:20.775391   87217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:58:20.784364   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:58:20.805972   87217 start.go:296] duration metric: took 123.364667ms for postStartSetup
	I0814 17:58:20.806009   87217 fix.go:56] duration metric: took 19.58979293s for fixHost
	I0814 17:58:20.806027   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:20.808290   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.808583   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:20.808607   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.808738   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:20.808923   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:20.809062   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:20.809188   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:20.809354   87217 main.go:141] libmachine: Using SSH client type: native
	I0814 17:58:20.809532   87217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:58:20.809543   87217 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:58:20.915821   87217 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723658300.892471974
	
	I0814 17:58:20.915853   87217 fix.go:216] guest clock: 1723658300.892471974
	I0814 17:58:20.915863   87217 fix.go:229] Guest: 2024-08-14 17:58:20.892471974 +0000 UTC Remote: 2024-08-14 17:58:20.806012732 +0000 UTC m=+19.735494460 (delta=86.459242ms)
	I0814 17:58:20.915887   87217 fix.go:200] guest clock delta is within tolerance: 86.459242ms
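The fixHost step reads the guest clock over SSH (date +%s.%N), compares it with the host clock, and only resyncs when the delta exceeds a tolerance. A small sketch of that comparison, assuming the epoch "seconds.nanoseconds" format shown in the log and a hypothetical 2-second tolerance (the real tolerance lives in fix.go):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1723658300.892471974" into a time.Time.
    // Assumes a 9-digit nanosecond field, as printed by `date +%s.%N`.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1723658300.892471974") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // hypothetical tolerance for this sketch
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }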
	I0814 17:58:20.915894   87217 start.go:83] releasing machines lock for "newest-cni-471541", held for 19.699692285s
	I0814 17:58:20.915920   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:20.916212   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:58:20.918768   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.919198   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:20.919228   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.919395   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:20.919909   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:20.920086   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:20.920167   87217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:58:20.920305   87217 ssh_runner.go:195] Run: cat /version.json
	I0814 17:58:20.920323   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:20.920323   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:20.922986   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.923227   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.923316   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:20.923358   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.923551   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:20.923628   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:20.923648   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:20.923755   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:20.923828   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:20.923926   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:20.924013   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:20.924065   87217 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:58:20.924127   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:20.924308   87217 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:58:20.999640   87217 ssh_runner.go:195] Run: systemctl --version
	I0814 17:58:21.035320   87217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:58:21.173472   87217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:58:21.179515   87217 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:58:21.179579   87217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:58:21.194244   87217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:58:21.194270   87217 start.go:495] detecting cgroup driver to use...
	I0814 17:58:21.194336   87217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:58:21.209577   87217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:58:21.223007   87217 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:58:21.223074   87217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:58:21.236254   87217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:58:21.249322   87217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:58:21.355217   87217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:58:21.520699   87217 docker.go:233] disabling docker service ...
	I0814 17:58:21.520777   87217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:58:21.534044   87217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:58:21.546423   87217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:58:21.659706   87217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:58:21.775819   87217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:58:21.788524   87217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:58:21.806080   87217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:58:21.806140   87217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:58:21.815676   87217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:58:21.815748   87217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:58:21.824973   87217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:58:21.834374   87217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:58:21.843827   87217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:58:21.853320   87217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:58:21.862647   87217 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:58:21.877491   87217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
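The series of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, switching the cgroup manager to cgroupfs, and opening unprivileged ports via default_sysctls. A rough Go equivalent of the pause-image and cgroup-manager edits, operating on a local copy of the file; this is illustrative only, since the real code shells out to sed over SSH:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "02-crio.conf" // local copy; the real file is /etc/crio/crio.conf.d/02-crio.conf
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}

    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }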
	I0814 17:58:21.886874   87217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:58:21.895355   87217 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:58:21.895405   87217 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:58:21.907518   87217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:58:21.922290   87217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:58:22.032615   87217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:58:22.164486   87217 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:58:22.164563   87217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:58:22.168986   87217 start.go:563] Will wait 60s for crictl version
	I0814 17:58:22.169039   87217 ssh_runner.go:195] Run: which crictl
	I0814 17:58:22.172241   87217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:58:22.211852   87217 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:58:22.211924   87217 ssh_runner.go:195] Run: crio --version
	I0814 17:58:22.237257   87217 ssh_runner.go:195] Run: crio --version
	I0814 17:58:22.268000   87217 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:58:22.269236   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:58:22.272056   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:22.272427   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:22.272459   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:22.272704   87217 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:58:22.276517   87217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:58:22.289264   87217 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0814 17:58:22.290495   87217 kubeadm.go:883] updating cluster {Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:58:22.290616   87217 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:58:22.290666   87217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:58:22.324251   87217 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:58:22.324310   87217 ssh_runner.go:195] Run: which lz4
	I0814 17:58:22.327895   87217 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:58:22.331654   87217 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:58:22.331688   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:58:23.519367   87217 crio.go:462] duration metric: took 1.191497674s to copy over tarball
	I0814 17:58:23.519435   87217 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:58:25.590341   87217 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.070843703s)
	I0814 17:58:25.590384   87217 crio.go:469] duration metric: took 2.070977773s to extract the tarball
	I0814 17:58:25.590395   87217 ssh_runner.go:146] rm: /preloaded.tar.lz4
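The preload path above copies the ~389 MB lz4 tarball to the guest and unpacks it into /var with tar before removing it. A minimal sketch that drives the same extraction locally via os/exec, assuming the tar and lz4 binaries are installed; minikube actually runs this through ssh_runner on the guest:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Mirrors the logged command:
    	// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4",
    		"-C", "/var",
    		"-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(fmt.Sprintf("extract failed: %v: %s", err, out))
    	}
    	fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
    }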
	I0814 17:58:25.632777   87217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:58:25.672245   87217 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:58:25.672271   87217 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:58:25.672279   87217 kubeadm.go:934] updating node { 192.168.72.111 8443 v1.31.0 crio true true} ...
	I0814 17:58:25.672435   87217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-471541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:58:25.672511   87217 ssh_runner.go:195] Run: crio config
	I0814 17:58:25.721216   87217 cni.go:84] Creating CNI manager for ""
	I0814 17:58:25.721233   87217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:58:25.721249   87217 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0814 17:58:25.721270   87217 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-471541 NodeName:newest-cni-471541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:58:25.721413   87217 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-471541"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:58:25.721470   87217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:58:25.731271   87217 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:58:25.731352   87217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:58:25.740740   87217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0814 17:58:25.755795   87217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:58:25.772254   87217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0814 17:58:25.791102   87217 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0814 17:58:25.794980   87217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:58:25.806751   87217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:58:25.936004   87217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:58:25.954795   87217 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541 for IP: 192.168.72.111
	I0814 17:58:25.954831   87217 certs.go:194] generating shared ca certs ...
	I0814 17:58:25.954859   87217 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:58:25.955045   87217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:58:25.955106   87217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:58:25.955122   87217 certs.go:256] generating profile certs ...
	I0814 17:58:25.955251   87217 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.key
	I0814 17:58:25.955390   87217 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b
	I0814 17:58:25.955456   87217 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key
	I0814 17:58:25.955615   87217 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:58:25.955649   87217 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:58:25.955660   87217 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:58:25.955683   87217 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:58:25.955717   87217 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:58:25.955750   87217 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:58:25.955795   87217 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:58:25.956598   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:58:26.014274   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:58:26.049967   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:58:26.079724   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:58:26.105661   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:58:26.134464   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:58:26.157434   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:58:26.179793   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:58:26.202087   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:58:26.224114   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:58:26.245893   87217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:58:26.267154   87217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:58:26.282237   87217 ssh_runner.go:195] Run: openssl version
	I0814 17:58:26.287572   87217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:58:26.297366   87217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:58:26.301300   87217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:58:26.301365   87217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:58:26.306628   87217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:58:26.316365   87217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:58:26.326661   87217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:58:26.330690   87217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:58:26.330736   87217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:58:26.335902   87217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:58:26.345721   87217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:58:26.355471   87217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:58:26.359273   87217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:58:26.359321   87217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:58:26.364393   87217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:58:26.374040   87217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:58:26.378169   87217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:58:26.383773   87217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:58:26.389226   87217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:58:26.394983   87217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:58:26.400444   87217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:58:26.405836   87217 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
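The two openssl idioms above do different jobs: -hash prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks created a few lines earlier, and -checkend 86400 exits non-zero if the certificate expires within the next 24 hours. A minimal sketch of both, reusing paths from this log:

	# subject hash used for the .0 symlink name
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# exit status 0 means the cert is still valid 86400 s (24 h) from now
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"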
	I0814 17:58:26.411196   87217 kubeadm.go:392] StartCluster: {Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0
s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:58:26.411283   87217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:58:26.411376   87217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:58:26.456743   87217 cri.go:89] found id: ""
	I0814 17:58:26.456848   87217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:58:26.467693   87217 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:58:26.467713   87217 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:58:26.467759   87217 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:58:26.477602   87217 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:58:26.478175   87217 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-471541" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:58:26.478451   87217 kubeconfig.go:62] /home/jenkins/minikube-integration/19446-13977/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-471541" cluster setting kubeconfig missing "newest-cni-471541" context setting]
	I0814 17:58:26.478886   87217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:58:26.480138   87217 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:58:26.488913   87217 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0814 17:58:26.488935   87217 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:58:26.488947   87217 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:58:26.488993   87217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:58:26.521504   87217 cri.go:89] found id: ""
	I0814 17:58:26.521603   87217 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:58:26.537284   87217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:58:26.546006   87217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:58:26.546021   87217 kubeadm.go:157] found existing configuration files:
	
	I0814 17:58:26.546062   87217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:58:26.554539   87217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:58:26.554601   87217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:58:26.563474   87217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:58:26.572070   87217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:58:26.572129   87217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:58:26.581976   87217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:58:26.590635   87217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:58:26.590691   87217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:58:26.599614   87217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:58:26.608295   87217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:58:26.608354   87217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:58:26.617770   87217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:58:26.626851   87217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:58:26.744680   87217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:58:28.011548   87217 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.266831611s)
	I0814 17:58:28.011599   87217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:58:28.214315   87217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:58:28.281990   87217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
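restartPrimaryControlPlane re-runs individual kubeadm init phases rather than a full kubeadm init. The sequence logged above is roughly equivalent to the following (a sketch; assumes the bundled kubeadm binary and the config path shown in the log):

	CFG=/var/tmp/minikube/kubeadm.yaml
	KUBEADM=/var/lib/minikube/binaries/v1.31.0/kubeadm
	sudo "$KUBEADM" init phase certs all          --config "$CFG"
	sudo "$KUBEADM" init phase kubeconfig all     --config "$CFG"
	sudo "$KUBEADM" init phase kubelet-start      --config "$CFG"
	sudo "$KUBEADM" init phase control-plane all  --config "$CFG"
	sudo "$KUBEADM" init phase etcd local         --config "$CFG"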
	I0814 17:58:28.362903   87217 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:58:28.362996   87217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:58:28.863296   87217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:58:29.364014   87217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:58:29.863068   87217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:58:29.913535   87217 api_server.go:72] duration metric: took 1.55063282s to wait for apiserver process to appear ...
	I0814 17:58:29.913569   87217 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:58:29.913592   87217 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:58:29.914131   87217 api_server.go:269] stopped: https://192.168.72.111:8443/healthz: Get "https://192.168.72.111:8443/healthz": dial tcp 192.168.72.111:8443: connect: connection refused
	I0814 17:58:30.413707   87217 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:58:32.447259   87217 api_server.go:279] https://192.168.72.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:58:32.447292   87217 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:58:32.447307   87217 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:58:32.492446   87217 api_server.go:279] https://192.168.72.111:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:58:32.492480   87217 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:58:32.913906   87217 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:58:32.918240   87217 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:58:32.918264   87217 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:58:33.413766   87217 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:58:33.429719   87217 api_server.go:279] https://192.168.72.111:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:58:33.429745   87217 api_server.go:103] status: https://192.168.72.111:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:58:33.914320   87217 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:58:33.918442   87217 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0814 17:58:33.925507   87217 api_server.go:141] control plane version: v1.31.0
	I0814 17:58:33.925539   87217 api_server.go:131] duration metric: took 4.011962427s to wait for apiserver health ...
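During the wait above, the 403 responses come from the unauthenticated probe hitting /healthz as system:anonymous, and the 500s list poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished yet; both clear once the control plane settles. To reproduce the same per-check breakdown by hand, the verbose form of the endpoint can be queried through kubectl (a sketch; assumes the kubeconfig for this profile is active):

	kubectl get --raw '/healthz?verbose'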
	I0814 17:58:33.925552   87217 cni.go:84] Creating CNI manager for ""
	I0814 17:58:33.925559   87217 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:58:33.927143   87217 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:58:33.928354   87217 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:58:33.946101   87217 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:58:33.968790   87217 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:58:33.980068   87217 system_pods.go:59] 8 kube-system pods found
	I0814 17:58:33.980131   87217 system_pods.go:61] "coredns-6f6b679f8f-qwgrb" [19a7dcc5-a7ef-4c1a-8d2b-f9fe4dcac290] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:58:33.980147   87217 system_pods.go:61] "etcd-newest-cni-471541" [b2a40767-5297-4676-b579-146172237eb4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:58:33.980164   87217 system_pods.go:61] "kube-apiserver-newest-cni-471541" [72c91661-d5b6-4b97-b8e4-811b7a8f6651] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:58:33.980177   87217 system_pods.go:61] "kube-controller-manager-newest-cni-471541" [148d4870-d2c0-438e-9b5c-85640f20db45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:58:33.980191   87217 system_pods.go:61] "kube-proxy-smtcr" [63ede546-1b98-4f05-8500-8a35f2fe52ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:58:33.980206   87217 system_pods.go:61] "kube-scheduler-newest-cni-471541" [b3192192-0c5b-485c-acc7-b14d6b8e5baf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:58:33.980221   87217 system_pods.go:61] "metrics-server-6867b74b74-2m6wv" [7bd266f5-ab3e-4a99-8919-08a21d009d53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:58:33.980235   87217 system_pods.go:61] "storage-provisioner" [8b2208e6-577e-4f6d-90e3-2213b2bd5b7a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:58:33.980246   87217 system_pods.go:74] duration metric: took 11.436652ms to wait for pod list to return data ...
	I0814 17:58:33.980260   87217 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:58:33.983855   87217 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:58:33.983886   87217 node_conditions.go:123] node cpu capacity is 2
	I0814 17:58:33.983900   87217 node_conditions.go:105] duration metric: took 3.63077ms to run NodePressure ...
	I0814 17:58:33.983931   87217 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:58:34.264852   87217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:58:34.280239   87217 ops.go:34] apiserver oom_adj: -16
	I0814 17:58:34.280263   87217 kubeadm.go:597] duration metric: took 7.812543375s to restartPrimaryControlPlane
	I0814 17:58:34.280275   87217 kubeadm.go:394] duration metric: took 7.869096555s to StartCluster
	I0814 17:58:34.280294   87217 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:58:34.280402   87217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:58:34.281533   87217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:58:34.281784   87217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:58:34.281905   87217 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:58:34.281978   87217 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-471541"
	I0814 17:58:34.281982   87217 config.go:182] Loaded profile config "newest-cni-471541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:58:34.282012   87217 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-471541"
	I0814 17:58:34.282024   87217 addons.go:69] Setting metrics-server=true in profile "newest-cni-471541"
	I0814 17:58:34.282024   87217 addons.go:69] Setting default-storageclass=true in profile "newest-cni-471541"
	W0814 17:58:34.282040   87217 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:58:34.282039   87217 addons.go:69] Setting dashboard=true in profile "newest-cni-471541"
	I0814 17:58:34.282055   87217 addons.go:234] Setting addon metrics-server=true in "newest-cni-471541"
	W0814 17:58:34.282063   87217 addons.go:243] addon metrics-server should already be in state true
	I0814 17:58:34.282070   87217 host.go:66] Checking if "newest-cni-471541" exists ...
	I0814 17:58:34.282076   87217 addons.go:234] Setting addon dashboard=true in "newest-cni-471541"
	I0814 17:58:34.282075   87217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-471541"
	W0814 17:58:34.282086   87217 addons.go:243] addon dashboard should already be in state true
	I0814 17:58:34.282090   87217 host.go:66] Checking if "newest-cni-471541" exists ...
	I0814 17:58:34.282117   87217 host.go:66] Checking if "newest-cni-471541" exists ...
	I0814 17:58:34.282468   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.282468   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.282498   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.282517   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.282564   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.282591   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.282595   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.282610   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.283348   87217 out.go:177] * Verifying Kubernetes components...
	I0814 17:58:34.284822   87217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:58:34.298951   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46285
	I0814 17:58:34.299434   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.299927   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.299950   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.300306   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.300876   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.300926   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.302216   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46529
	I0814 17:58:34.302404   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43563
	I0814 17:58:34.302525   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I0814 17:58:34.302599   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.302822   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.302921   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.303064   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.303081   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.303358   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.303381   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.303478   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.303499   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.303503   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.303814   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.303904   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.304218   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.304246   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.304261   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.304284   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.304533   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:58:34.307454   87217 addons.go:234] Setting addon default-storageclass=true in "newest-cni-471541"
	W0814 17:58:34.307470   87217 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:58:34.307499   87217 host.go:66] Checking if "newest-cni-471541" exists ...
	I0814 17:58:34.307780   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.307798   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.321227   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35409
	I0814 17:58:34.321635   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.321945   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.321955   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.322161   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.322287   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:58:34.322565   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0814 17:58:34.322736   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37233
	I0814 17:58:34.323200   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.323590   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.323716   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.323724   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.323986   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:34.323987   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.324204   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:58:34.324760   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.324773   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.324835   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35711
	I0814 17:58:34.325115   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.325239   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.325547   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:34.325751   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.325767   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.325985   87217 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:58:34.326006   87217 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:58:34.326085   87217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:58:34.326102   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.326304   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:58:34.327348   87217 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:58:34.327504   87217 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:58:34.327518   87217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:58:34.327537   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:34.327625   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:34.329050   87217 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:58:34.329063   87217 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:58:34.329089   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:34.329057   87217 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0814 17:58:34.330680   87217 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0814 17:58:34.330906   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:34.331563   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:34.331587   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:34.331836   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0814 17:58:34.331854   87217 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0814 17:58:34.331883   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:34.332629   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:34.332818   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:34.333271   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:34.333298   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:34.333375   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:34.333590   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:34.333666   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:34.333730   87217 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:58:34.334032   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:34.334360   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:34.334525   87217 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:58:34.335726   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:34.336118   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:34.336149   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:34.336283   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:34.336467   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:34.336648   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:34.336797   87217 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:58:34.344421   87217 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I0814 17:58:34.344760   87217 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:58:34.345426   87217 main.go:141] libmachine: Using API Version  1
	I0814 17:58:34.345460   87217 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:58:34.345908   87217 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:58:34.346093   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:58:34.347426   87217 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:58:34.347637   87217 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:58:34.347647   87217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:58:34.347659   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:58:34.350464   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:34.350851   87217 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:58:11 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:58:34.350904   87217 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:58:34.351100   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:58:34.351242   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:58:34.351397   87217 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:58:34.351523   87217 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:58:34.469136   87217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:58:34.498375   87217 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:58:34.498508   87217 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:58:34.515798   87217 api_server.go:72] duration metric: took 233.976479ms to wait for apiserver process to appear ...
	I0814 17:58:34.515829   87217 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:58:34.515854   87217 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:58:34.523429   87217 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
	I0814 17:58:34.525560   87217 api_server.go:141] control plane version: v1.31.0
	I0814 17:58:34.525582   87217 api_server.go:131] duration metric: took 9.745601ms to wait for apiserver health ...
	I0814 17:58:34.525589   87217 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:58:34.530903   87217 system_pods.go:59] 8 kube-system pods found
	I0814 17:58:34.530944   87217 system_pods.go:61] "coredns-6f6b679f8f-qwgrb" [19a7dcc5-a7ef-4c1a-8d2b-f9fe4dcac290] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:58:34.530974   87217 system_pods.go:61] "etcd-newest-cni-471541" [b2a40767-5297-4676-b579-146172237eb4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:58:34.530983   87217 system_pods.go:61] "kube-apiserver-newest-cni-471541" [72c91661-d5b6-4b97-b8e4-811b7a8f6651] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:58:34.530990   87217 system_pods.go:61] "kube-controller-manager-newest-cni-471541" [148d4870-d2c0-438e-9b5c-85640f20db45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:58:34.530994   87217 system_pods.go:61] "kube-proxy-smtcr" [63ede546-1b98-4f05-8500-8a35f2fe52ab] Running
	I0814 17:58:34.531000   87217 system_pods.go:61] "kube-scheduler-newest-cni-471541" [b3192192-0c5b-485c-acc7-b14d6b8e5baf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:58:34.531007   87217 system_pods.go:61] "metrics-server-6867b74b74-2m6wv" [7bd266f5-ab3e-4a99-8919-08a21d009d53] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:58:34.531019   87217 system_pods.go:61] "storage-provisioner" [8b2208e6-577e-4f6d-90e3-2213b2bd5b7a] Running
	I0814 17:58:34.531027   87217 system_pods.go:74] duration metric: took 5.432709ms to wait for pod list to return data ...
	I0814 17:58:34.531034   87217 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:58:34.533999   87217 default_sa.go:45] found service account: "default"
	I0814 17:58:34.534025   87217 default_sa.go:55] duration metric: took 2.982022ms for default service account to be created ...
	I0814 17:58:34.534039   87217 kubeadm.go:582] duration metric: took 252.222871ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 17:58:34.534057   87217 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:58:34.537131   87217 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:58:34.537150   87217 node_conditions.go:123] node cpu capacity is 2
	I0814 17:58:34.537159   87217 node_conditions.go:105] duration metric: took 3.096583ms to run NodePressure ...
	I0814 17:58:34.537169   87217 start.go:241] waiting for startup goroutines ...
	I0814 17:58:34.578202   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0814 17:58:34.578225   87217 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0814 17:58:34.596340   87217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:58:34.600396   87217 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:58:34.600415   87217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:58:34.630105   87217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:58:34.638246   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0814 17:58:34.638278   87217 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0814 17:58:34.660329   87217 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:58:34.660357   87217 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:58:34.740179   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0814 17:58:34.740203   87217 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0814 17:58:34.741947   87217 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:58:34.741966   87217 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:58:34.772979   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0814 17:58:34.772999   87217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0814 17:58:34.792364   87217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:58:34.814139   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0814 17:58:34.814171   87217 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0814 17:58:34.859203   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0814 17:58:34.859236   87217 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0814 17:58:34.924749   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0814 17:58:34.924781   87217 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0814 17:58:35.006860   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0814 17:58:35.006894   87217 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0814 17:58:35.055226   87217 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 17:58:35.055252   87217 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0814 17:58:35.070901   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:35.070948   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:35.071209   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:35.071226   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:35.071242   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:35.071250   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:35.071483   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:35.071499   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:35.071515   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:58:35.077620   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:35.077644   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:35.077904   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:58:35.077942   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:35.077952   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:35.096607   87217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0814 17:58:36.473147   87217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.842996385s)
	I0814 17:58:36.473207   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:36.473219   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:36.473542   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:36.473595   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:36.473614   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:58:36.473626   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:36.473641   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:36.473925   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:58:36.473944   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:36.473982   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:36.538302   87217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.745895545s)
	I0814 17:58:36.538365   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:36.538378   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:36.538755   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:36.538764   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:58:36.538775   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:36.538791   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:36.538800   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:36.539079   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:58:36.539115   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:36.539129   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:36.539146   87217 addons.go:475] Verifying addon metrics-server=true in "newest-cni-471541"
	I0814 17:58:36.851177   87217 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.754530129s)
	I0814 17:58:36.851230   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:36.851241   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:36.851580   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:58:36.851645   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:36.851656   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:36.851671   87217 main.go:141] libmachine: Making call to close driver server
	I0814 17:58:36.851683   87217 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:58:36.851911   87217 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:58:36.851936   87217 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:58:36.851999   87217 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:58:36.853595   87217 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-471541 addons enable metrics-server
	
	I0814 17:58:36.855033   87217 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0814 17:58:36.856279   87217 addons.go:510] duration metric: took 2.574373979s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0814 17:58:36.856315   87217 start.go:246] waiting for cluster config update ...
	I0814 17:58:36.856333   87217 start.go:255] writing updated cluster config ...
	I0814 17:58:36.856556   87217 ssh_runner.go:195] Run: rm -f paused
	I0814 17:58:36.903278   87217 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:58:36.905151   87217 out.go:177] * Done! kubectl is now configured to use "newest-cni-471541" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.848377354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d07556a9-a8dd-4046-9c9d-ea207a997c48 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.849767827Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4fa8b73d-7d65-4cff-a72b-730ec3441b1c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.850378971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658328850133814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4fa8b73d-7d65-4cff-a72b-730ec3441b1c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.850836107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29158a96-daf1-45f7-b956-e2ab930490ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.850887254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29158a96-daf1-45f7-b956-e2ab930490ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.851086094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09,PodSandboxId:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657336334630107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474,PodSandboxId:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657336327637917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e,PodSandboxId:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335837826893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366,PodSandboxId:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335766000146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-5
3cd65350631,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d,PodSandboxId:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657323926969931
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f,PodSandboxId:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657323944015362,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c,PodSandboxId:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657323897457621,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2,PodSandboxId:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657323838111643,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44,PodSandboxId:8722d35792d91589df21a449dd3ad27d7753ab57bafb835a3eb16ca6f2795c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657038716131875,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29158a96-daf1-45f7-b956-e2ab930490ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.886345000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f13d242-8281-4689-842c-fb3a90dd6067 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.886422572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f13d242-8281-4689-842c-fb3a90dd6067 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.887732003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25f568a1-43e4-4914-b512-efdac7efda84 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.890253221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658328889589800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25f568a1-43e4-4914-b512-efdac7efda84 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.893491156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36b6d7f6-e694-403b-8566-2321b60fb9ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.893594578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36b6d7f6-e694-403b-8566-2321b60fb9ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.893802427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09,PodSandboxId:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657336334630107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474,PodSandboxId:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657336327637917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e,PodSandboxId:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335837826893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366,PodSandboxId:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335766000146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-5
3cd65350631,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d,PodSandboxId:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657323926969931
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f,PodSandboxId:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657323944015362,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c,PodSandboxId:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657323897457621,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2,PodSandboxId:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657323838111643,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44,PodSandboxId:8722d35792d91589df21a449dd3ad27d7753ab57bafb835a3eb16ca6f2795c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657038716131875,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36b6d7f6-e694-403b-8566-2321b60fb9ad name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.930119927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0946ac45-159d-431b-8f9f-0b65fee6528d name=/runtime.v1.RuntimeService/Version
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.930239913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0946ac45-159d-431b-8f9f-0b65fee6528d name=/runtime.v1.RuntimeService/Version
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.931538225Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f81e64fd-e498-475e-9712-76f89e4a17ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.931922015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658328931900305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f81e64fd-e498-475e-9712-76f89e4a17ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.932477645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be22e0f9-b050-43fd-b56c-06ee0522d511 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.932530820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be22e0f9-b050-43fd-b56c-06ee0522d511 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.932718255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09,PodSandboxId:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657336334630107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474,PodSandboxId:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657336327637917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e,PodSandboxId:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335837826893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366,PodSandboxId:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335766000146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-5
3cd65350631,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d,PodSandboxId:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657323926969931
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f,PodSandboxId:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657323944015362,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c,PodSandboxId:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657323897457621,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2,PodSandboxId:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657323838111643,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44,PodSandboxId:8722d35792d91589df21a449dd3ad27d7753ab57bafb835a3eb16ca6f2795c6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657038716131875,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be22e0f9-b050-43fd-b56c-06ee0522d511 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.953547470Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c04082d7-ca44-4395-a4e7-c60e9db10876 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.953776776Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&PodSandboxMetadata{Name:kube-proxy-254cb,Uid:e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657335998994157,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T17:42:14.191339860Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5128eea6-234c-4aea-a9b7-835e
840a31a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657335886259367,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-14T17:42:15.277822181Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42add418f61cf0b073a97c353641d30a3cd271c03729c17cb70c3d39f78e3eb9,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-5q86r,Uid:849df692-9f8e-455e-b209-25801151513b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657335846439610,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-5q86r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 849df692-9f8e-455e-b209-25801151513b,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T17:42:15.538257266Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8
f-nm28w,Uid:ba1fe4d0-1869-49ec-a281-18119a2ad26b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657335351471897,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T17:42:14.422039834Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-k5qnj,Uid:cf05f7e2-29de-4437-b182-53cd65350631,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657335306364903,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-53cd65350631,k8s-app: kube-dns,
pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-14T17:42:14.397682444Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-885666,Uid:714af192e9e140702e947c3dbe222882,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1723657323709365067,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.184:8444,kubernetes.io/config.hash: 714af192e9e140702e947c3dbe222882,kubernetes.io/config.seen: 2024-08-14T17:42:03.256206295Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{I
d:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-885666,Uid:4d96126e303d8ee1f33f434b36ab0933,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657323702871364,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4d96126e303d8ee1f33f434b36ab0933,kubernetes.io/config.seen: 2024-08-14T17:42:03.256208669Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-885666,Uid:836c94ed11c93508b4334cad9fff3a9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657323698663
084,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 836c94ed11c93508b4334cad9fff3a9c,kubernetes.io/config.seen: 2024-08-14T17:42:03.256207519Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-885666,Uid:62fb3d7de1a23f009227be1c8d40c928,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723657323696195587,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,tier: control-plane,},Annotat
ions:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.184:2379,kubernetes.io/config.hash: 62fb3d7de1a23f009227be1c8d40c928,kubernetes.io/config.seen: 2024-08-14T17:42:03.256202798Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c04082d7-ca44-4395-a4e7-c60e9db10876 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.954500214Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80a517f8-cd94-4c6f-a1ad-9215dabd83cb name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.954574350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80a517f8-cd94-4c6f-a1ad-9215dabd83cb name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:58:48 default-k8s-diff-port-885666 crio[732]: time="2024-08-14 17:58:48.954754781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09,PodSandboxId:c85483bcc56c2a0d0777da1baa3907a957edc62433f65ad25cb4383190b20390,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723657336334630107,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-254cb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474,PodSandboxId:ff00e43e463e38e4145902c004d052b6a2bcc839284155c096edb200afb06d1b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657336327637917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5128eea6-234c-4aea-a9b7-835e840a31a3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e,PodSandboxId:f09e9cfc17c5f6ebfd6f1ca8254a7fbd68a9380935213f14e0c6b2da173fdd82,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335837826893,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-nm28w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba1fe4d0-1869-49ec-a281-18119a2ad26b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp
\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366,PodSandboxId:ebd5ed6cc8e2e1f5024c47dc25d579cfae1ccd301271f7a26dd69dac669d8f67,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657335766000146,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k5qnj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf05f7e2-29de-4437-b182-5
3cd65350631,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d,PodSandboxId:5f7bba7b439236b30b841000e022540071e032467e2e35ae33f0ccb9c3d08914,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657323926969931
,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d96126e303d8ee1f33f434b36ab0933,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f,PodSandboxId:bd8f0f711bacfc15386fb43b22c3fae23cfd42ce00ab99c5f724ac451ea5ddd8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657323944015362,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62fb3d7de1a23f009227be1c8d40c928,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c,PodSandboxId:88d5d70ea9c84cadc596ff883126d26cb63ab7e1c27ccc4b824d9132f1606142,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657323897457621,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 714af192e9e140702e947c3dbe222882,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2,PodSandboxId:cd3bc6dc0b59b37c6e9fa23fc31cd8430d2d7a7cc7a06f3b03ec5e1d794c97c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657323838111643,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-885666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 836c94ed11c93508b4334cad9fff3a9c,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80a517f8-cd94-4c6f-a1ad-9215dabd83cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	503e9df483a62       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   c85483bcc56c2       kube-proxy-254cb
	2bbb9ed10c9df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   ff00e43e463e3       storage-provisioner
	77c6b70d58c27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   f09e9cfc17c5f       coredns-6f6b679f8f-nm28w
	ba2d721dacbac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   ebd5ed6cc8e2e       coredns-6f6b679f8f-k5qnj
	f4d593b5b514c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   bd8f0f711bacf       etcd-default-k8s-diff-port-885666
	d17baf91a2a7f       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   5f7bba7b43923       kube-scheduler-default-k8s-diff-port-885666
	2d572687dae08       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   88d5d70ea9c84       kube-apiserver-default-k8s-diff-port-885666
	7edcb95c40527       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   cd3bc6dc0b59b       kube-controller-manager-default-k8s-diff-port-885666
	b1cbb4963f9cc       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   8722d35792d91       kube-apiserver-default-k8s-diff-port-885666
	
	
	==> coredns [77c6b70d58c277e3b0387e086c84726ddcc3a03ccf7b66d2e89d918282324a2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ba2d721dacbaca5a99bee7fbf879baa4daefb16cb3958142bc5caf2adb228366] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-885666
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-885666
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=default-k8s-diff-port-885666
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T17_42_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:42:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-885666
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:58:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:57:35 +0000   Wed, 14 Aug 2024 17:42:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:57:35 +0000   Wed, 14 Aug 2024 17:42:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:57:35 +0000   Wed, 14 Aug 2024 17:42:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:57:35 +0000   Wed, 14 Aug 2024 17:42:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.184
	  Hostname:    default-k8s-diff-port-885666
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 77f491de9fd64d2f8fc1bc7b2c4fbd7d
	  System UUID:                77f491de-9fd6-4d2f-8fc1-bc7b2c4fbd7d
	  Boot ID:                    ee6ef590-015f-4ef0-8f7e-d46cb391e6b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-k5qnj                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-nm28w                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-885666                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-885666             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-885666    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-254cb                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-885666             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-5q86r                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node default-k8s-diff-port-885666 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node default-k8s-diff-port-885666 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node default-k8s-diff-port-885666 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node default-k8s-diff-port-885666 event: Registered Node default-k8s-diff-port-885666 in Controller
	
	
	==> dmesg <==
	[  +0.053045] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038602] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.808343] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.858660] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.529056] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug14 17:37] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.066165] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067431] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.188815] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.150186] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.269086] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.112439] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +1.991859] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +0.057932] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.516594] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.614083] kauditd_printk_skb: 85 callbacks suppressed
	[Aug14 17:42] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.187600] systemd-fstab-generator[2624]: Ignoring "noauto" option for root device
	[  +4.708149] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.350341] systemd-fstab-generator[2942]: Ignoring "noauto" option for root device
	[  +5.905194] systemd-fstab-generator[3070]: Ignoring "noauto" option for root device
	[  +0.088423] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.898027] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [f4d593b5b514c6759744bb5c123d33712566a2bc4944e019c89d91d768832a5f] <==
	{"level":"warn","ts":"2024-08-14T17:57:33.266617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.784399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:57:33.266711Z","caller":"traceutil/trace.go:171","msg":"trace[347134983] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1201; }","duration":"185.951026ms","start":"2024-08-14T17:57:33.080749Z","end":"2024-08-14T17:57:33.266700Z","steps":["trace[347134983] 'agreement among raft nodes before linearized reading'  (duration: 185.752452ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T17:57:33.266922Z","caller":"traceutil/trace.go:171","msg":"trace[814779385] transaction","detail":"{read_only:false; response_revision:1201; number_of_response:1; }","duration":"250.642372ms","start":"2024-08-14T17:57:33.016257Z","end":"2024-08-14T17:57:33.266899Z","steps":["trace[814779385] 'process raft request'  (duration: 250.00299ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:57:33.527016Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.394082ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:57:33.527135Z","caller":"traceutil/trace.go:171","msg":"trace[494039712] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1201; }","duration":"201.584181ms","start":"2024-08-14T17:57:33.325529Z","end":"2024-08-14T17:57:33.527113Z","steps":["trace[494039712] 'range keys from in-memory index tree'  (duration: 201.380648ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:57:33.679938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.49138ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7583940078813489015 > lease_revoke:<id:693f9151fb86a719>","response":"size:29"}
	{"level":"info","ts":"2024-08-14T17:57:33.680015Z","caller":"traceutil/trace.go:171","msg":"trace[611965795] linearizableReadLoop","detail":"{readStateIndex:1401; appliedIndex:1400; }","duration":"187.484017ms","start":"2024-08-14T17:57:33.492516Z","end":"2024-08-14T17:57:33.680000Z","steps":["trace[611965795] 'read index received'  (duration: 33.729304ms)","trace[611965795] 'applied index is now lower than readState.Index'  (duration: 153.753871ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:57:33.680109Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.590654ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:57:33.680139Z","caller":"traceutil/trace.go:171","msg":"trace[1323920456] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1201; }","duration":"187.628382ms","start":"2024-08-14T17:57:33.492505Z","end":"2024-08-14T17:57:33.680134Z","steps":["trace[1323920456] 'agreement among raft nodes before linearized reading'  (duration: 187.569587ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:57:33.680361Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.120531ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:57:33.680394Z","caller":"traceutil/trace.go:171","msg":"trace[1739045681] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1201; }","duration":"153.156152ms","start":"2024-08-14T17:57:33.527231Z","end":"2024-08-14T17:57:33.680387Z","steps":["trace[1739045681] 'agreement among raft nodes before linearized reading'  (duration: 153.108759ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T17:58:27.668684Z","caller":"traceutil/trace.go:171","msg":"trace[601948024] linearizableReadLoop","detail":"{readStateIndex:1455; appliedIndex:1454; }","duration":"114.906334ms","start":"2024-08-14T17:58:27.553741Z","end":"2024-08-14T17:58:27.668647Z","steps":["trace[601948024] 'read index received'  (duration: 114.736838ms)","trace[601948024] 'applied index is now lower than readState.Index'  (duration: 168.701µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:58:27.668921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.088771ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:58:27.668946Z","caller":"traceutil/trace.go:171","msg":"trace[1345724524] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1245; }","duration":"115.202454ms","start":"2024-08-14T17:58:27.553736Z","end":"2024-08-14T17:58:27.668939Z","steps":["trace[1345724524] 'agreement among raft nodes before linearized reading'  (duration: 115.063466ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T17:58:27.669203Z","caller":"traceutil/trace.go:171","msg":"trace[16437928] transaction","detail":"{read_only:false; response_revision:1245; number_of_response:1; }","duration":"123.794248ms","start":"2024-08-14T17:58:27.545396Z","end":"2024-08-14T17:58:27.669191Z","steps":["trace[16437928] 'process raft request'  (duration: 123.129204ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T17:58:28.446841Z","caller":"traceutil/trace.go:171","msg":"trace[821195406] linearizableReadLoop","detail":"{readStateIndex:1456; appliedIndex:1455; }","duration":"121.006992ms","start":"2024-08-14T17:58:28.325785Z","end":"2024-08-14T17:58:28.446792Z","steps":["trace[821195406] 'read index received'  (duration: 120.528195ms)","trace[821195406] 'applied index is now lower than readState.Index'  (duration: 477.885µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:58:28.446978Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.167042ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:58:28.447009Z","caller":"traceutil/trace.go:171","msg":"trace[1615048754] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1245; }","duration":"121.221857ms","start":"2024-08-14T17:58:28.325778Z","end":"2024-08-14T17:58:28.447000Z","steps":["trace[1615048754] 'agreement among raft nodes before linearized reading'  (duration: 121.147072ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:58:28.867733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.486107ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7583940078813489341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.184\" mod_revision:1238 > success:<request_put:<key:\"/registry/masterleases/192.168.50.184\" value_size:67 lease:7583940078813489338 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.184\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T17:58:28.867985Z","caller":"traceutil/trace.go:171","msg":"trace[530627783] linearizableReadLoop","detail":"{readStateIndex:1457; appliedIndex:1456; }","duration":"379.439264ms","start":"2024-08-14T17:58:28.488529Z","end":"2024-08-14T17:58:28.867968Z","steps":["trace[530627783] 'read index received'  (duration: 261.936756ms)","trace[530627783] 'applied index is now lower than readState.Index'  (duration: 117.500105ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T17:58:28.868058Z","caller":"traceutil/trace.go:171","msg":"trace[1394256791] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"419.156274ms","start":"2024-08-14T17:58:28.448884Z","end":"2024-08-14T17:58:28.868041Z","steps":["trace[1394256791] 'process raft request'  (duration: 301.580465ms)","trace[1394256791] 'compare'  (duration: 116.140457ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:58:28.868188Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"379.631792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:58:28.868227Z","caller":"traceutil/trace.go:171","msg":"trace[1008390238] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1246; }","duration":"379.711764ms","start":"2024-08-14T17:58:28.488507Z","end":"2024-08-14T17:58:28.868219Z","steps":["trace[1008390238] 'agreement among raft nodes before linearized reading'  (duration: 379.583544ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:58:28.868259Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T17:58:28.488464Z","time spent":"379.786136ms","remote":"127.0.0.1:52750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-14T17:58:28.868376Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T17:58:28.448864Z","time spent":"419.279306ms","remote":"127.0.0.1:52582","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.184\" mod_revision:1238 > success:<request_put:<key:\"/registry/masterleases/192.168.50.184\" value_size:67 lease:7583940078813489338 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.184\" > >"}
	
	
	==> kernel <==
	 17:58:49 up 21 min,  0 users,  load average: 0.04, 0.08, 0.08
	Linux default-k8s-diff-port-885666 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2d572687dae0896402a546d4f4dbe24e379b932f68c3e0b3a3c3f8af35ba212c] <==
	I0814 17:55:07.536177       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:55:07.536187       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:57:06.533101       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:06.533465       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 17:57:07.534967       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:07.535017       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 17:57:07.535122       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:07.535298       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:57:07.536139       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:57:07.537256       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:58:07.537344       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:58:07.537463       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0814 17:58:07.537360       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:58:07.537531       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:58:07.539423       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:58:07.539493       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [b1cbb4963f9ccff3d77ba5a2b01e3f98fc059d4d696e19e10bc46d45523e3b44] <==
	W0814 17:41:58.729046       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.828527       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.843405       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.871904       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.880374       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.882776       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.885143       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.894518       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.944913       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.956906       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:58.985443       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.020001       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.021375       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.054371       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.059235       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.068929       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.070309       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.136364       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.158235       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.241657       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.298615       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.316570       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.433264       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.460771       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:41:59.807237       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [7edcb95c4052750ecb4852e1b8a3f6476c996872cf7be8bb2b189ff0bd1bd8b2] <==
	I0814 17:53:39.967756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="59.416µs"
	E0814 17:53:43.535082       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:53:44.121926       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:54:13.541201       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:54:14.130925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:54:43.547389       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:54:44.145227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:55:13.554560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:55:14.153831       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:55:43.560624       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:55:44.161404       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:56:13.566525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:56:14.168813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:56:43.572812       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:56:44.183450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:57:13.580890       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:57:14.190542       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:57:35.120682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-885666"
	E0814 17:57:43.587632       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:57:44.198871       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:58:13.595073       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:58:14.208076       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:58:36.981084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="337.713µs"
	E0814 17:58:43.601345       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:58:44.221922       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [503e9df483a627bb3855cc575952c002326a861e96829096b407406eb5983f09] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:42:16.576482       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:42:16.586685       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.184"]
	E0814 17:42:16.586825       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:42:16.621957       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:42:16.622042       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:42:16.622082       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:42:16.624893       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:42:16.625222       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:42:16.625251       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:42:16.626664       1 config.go:197] "Starting service config controller"
	I0814 17:42:16.626712       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:42:16.626732       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:42:16.626736       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:42:16.629569       1 config.go:326] "Starting node config controller"
	I0814 17:42:16.629595       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:42:16.727230       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 17:42:16.727379       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:42:16.730227       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d17baf91a2a7f6358ae63f23dc0895492f2dd397ad7cff6a73b4c8c365f5ad9d] <==
	W0814 17:42:06.597927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 17:42:06.598071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:06.598302       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 17:42:06.598358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:06.598377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 17:42:06.598509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:06.598319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 17:42:06.598626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:06.598850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 17:42:06.598940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.417577       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 17:42:07.417683       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0814 17:42:07.433705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 17:42:07.433899       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.497430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 17:42:07.497630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.511714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 17:42:07.511761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.583309       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 17:42:07.583361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.743341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0814 17:42:07.743384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:07.768782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0814 17:42:07.768980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0814 17:42:10.088527       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 17:57:49 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:57:49.244538    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658269244058652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:55 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:57:55.953698    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:57:59 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:57:59.246120    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658279245643302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:59 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:57:59.246189    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658279245643302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:08 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:08.983338    2949 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:58:08 default-k8s-diff-port-885666 kubelet[2949]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:58:08 default-k8s-diff-port-885666 kubelet[2949]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:58:08 default-k8s-diff-port-885666 kubelet[2949]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:58:08 default-k8s-diff-port-885666 kubelet[2949]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:58:09 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:09.248612    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658289248229257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:09 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:09.248666    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658289248229257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:09 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:09.954735    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:58:19 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:19.251714    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658299251009710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:19 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:19.251798    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658299251009710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:21 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:21.970424    2949 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 14 17:58:21 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:21.970837    2949 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 14 17:58:21 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:21.971680    2949 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8gqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-5q86r_kube-system(849df692-9f8e-455e-b209-25801151513b): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 14 17:58:21 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:21.973034    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:58:29 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:29.253621    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658309253278500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:29 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:29.253653    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658309253278500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:36 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:36.954896    2949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-5q86r" podUID="849df692-9f8e-455e-b209-25801151513b"
	Aug 14 17:58:39 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:39.255847    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658319255345938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:39 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:39.255888    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658319255345938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:49 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:49.257887    2949 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658329257461322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:58:49 default-k8s-diff-port-885666 kubelet[2949]: E0814 17:58:49.257916    2949 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658329257461322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2bbb9ed10c9dfa9f82fa319eec929efc17c724147ce4ddb13fff131efd549474] <==
	I0814 17:42:16.445308       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 17:42:16.478240       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 17:42:16.478345       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 17:42:16.492098       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 17:42:16.493070       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-885666_b4c1d616-34c5-489c-b574-4d9c19c202f2!
	I0814 17:42:16.496474       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a256549-c7e3-4b8b-b19c-b3b2b3d68570", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-885666_b4c1d616-34c5-489c-b574-4d9c19c202f2 became leader
	I0814 17:42:16.594241       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-885666_b4c1d616-34c5-489c-b574-4d9c19c202f2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-885666 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-5q86r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-885666 describe pod metrics-server-6867b74b74-5q86r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-885666 describe pod metrics-server-6867b74b74-5q86r: exit status 1 (58.621351ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-5q86r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-885666 describe pod metrics-server-6867b74b74-5q86r: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (442.47s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (340.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-545149 -n no-preload-545149
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-14 17:57:53.756875685 +0000 UTC m=+6507.442158480
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-545149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-545149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.53µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-545149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-545149 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-545149 logs -n 25: (1.312010021s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-984053 sudo find                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo crio                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-984053                                       | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-005029 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | disable-driver-mounts-005029                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:30 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-545149             | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309673            | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	| start   | -p newest-cni-471541 --memory=2200 --alsologtostderr   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-471541             | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC | 14 Aug 24 17:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-471541                                   | newest-cni-471541            | jenkins | v1.33.1 | 14 Aug 24 17:57 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:57:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:57:05.657506   86299 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:57:05.657782   86299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:57:05.657792   86299 out.go:304] Setting ErrFile to fd 2...
	I0814 17:57:05.657798   86299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:57:05.657998   86299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:57:05.658581   86299 out.go:298] Setting JSON to false
	I0814 17:57:05.659605   86299 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":9570,"bootTime":1723648656,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:57:05.659662   86299 start.go:139] virtualization: kvm guest
	I0814 17:57:05.662552   86299 out.go:177] * [newest-cni-471541] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:57:05.663970   86299 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:57:05.663967   86299 notify.go:220] Checking for updates...
	I0814 17:57:05.665550   86299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:57:05.666948   86299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:57:05.668170   86299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:57:05.669321   86299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:57:05.670447   86299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:57:05.671933   86299 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:05.672015   86299 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:05.672096   86299 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:05.672164   86299 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:57:05.707750   86299 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 17:57:05.708761   86299 start.go:297] selected driver: kvm2
	I0814 17:57:05.708778   86299 start.go:901] validating driver "kvm2" against <nil>
	I0814 17:57:05.708798   86299 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:57:05.709845   86299 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:57:05.709957   86299 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:57:05.724761   86299 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:57:05.724805   86299 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0814 17:57:05.724831   86299 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0814 17:57:05.725080   86299 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 17:57:05.725145   86299 cni.go:84] Creating CNI manager for ""
	I0814 17:57:05.725157   86299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:57:05.725164   86299 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 17:57:05.725216   86299 start.go:340] cluster config:
	{Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:57:05.725322   86299 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:57:05.727106   86299 out.go:177] * Starting "newest-cni-471541" primary control-plane node in "newest-cni-471541" cluster
	I0814 17:57:05.728058   86299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:57:05.728087   86299 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:57:05.728093   86299 cache.go:56] Caching tarball of preloaded images
	I0814 17:57:05.728153   86299 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:57:05.728163   86299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0814 17:57:05.728246   86299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/config.json ...
	I0814 17:57:05.728261   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/config.json: {Name:mk84f144973bc92a6534aa2eb616796cf2d1d274 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:05.728380   86299 start.go:360] acquireMachinesLock for newest-cni-471541: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:57:05.728418   86299 start.go:364] duration metric: took 20.18µs to acquireMachinesLock for "newest-cni-471541"
	I0814 17:57:05.728434   86299 start.go:93] Provisioning new machine with config: &{Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:57:05.728481   86299 start.go:125] createHost starting for "" (driver="kvm2")
	I0814 17:57:05.729912   86299 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0814 17:57:05.730078   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:05.730119   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:05.744773   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0814 17:57:05.745230   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:05.745769   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:05.745796   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:05.746130   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:05.746294   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:57:05.746466   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:05.746619   86299 start.go:159] libmachine.API.Create for "newest-cni-471541" (driver="kvm2")
	I0814 17:57:05.746647   86299 client.go:168] LocalClient.Create starting
	I0814 17:57:05.746683   86299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem
	I0814 17:57:05.746721   86299 main.go:141] libmachine: Decoding PEM data...
	I0814 17:57:05.746737   86299 main.go:141] libmachine: Parsing certificate...
	I0814 17:57:05.746794   86299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem
	I0814 17:57:05.746813   86299 main.go:141] libmachine: Decoding PEM data...
	I0814 17:57:05.746826   86299 main.go:141] libmachine: Parsing certificate...
	I0814 17:57:05.746841   86299 main.go:141] libmachine: Running pre-create checks...
	I0814 17:57:05.746853   86299 main.go:141] libmachine: (newest-cni-471541) Calling .PreCreateCheck
	I0814 17:57:05.747139   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetConfigRaw
	I0814 17:57:05.747496   86299 main.go:141] libmachine: Creating machine...
	I0814 17:57:05.747509   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Create
	I0814 17:57:05.747633   86299 main.go:141] libmachine: (newest-cni-471541) Creating KVM machine...
	I0814 17:57:05.748816   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found existing default KVM network
	I0814 17:57:05.749904   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.749762   86322 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:63:32:a0} reservation:<nil>}
	I0814 17:57:05.750737   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.750671   86322 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:90:b2:95} reservation:<nil>}
	I0814 17:57:05.751496   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.751434   86322 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:8e:13:0f} reservation:<nil>}
	I0814 17:57:05.752542   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.752449   86322 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000306fb0}
	I0814 17:57:05.752572   86299 main.go:141] libmachine: (newest-cni-471541) DBG | created network xml: 
	I0814 17:57:05.752596   86299 main.go:141] libmachine: (newest-cni-471541) DBG | <network>
	I0814 17:57:05.752609   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   <name>mk-newest-cni-471541</name>
	I0814 17:57:05.752618   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   <dns enable='no'/>
	I0814 17:57:05.752629   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   
	I0814 17:57:05.752636   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0814 17:57:05.752645   86299 main.go:141] libmachine: (newest-cni-471541) DBG |     <dhcp>
	I0814 17:57:05.752652   86299 main.go:141] libmachine: (newest-cni-471541) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0814 17:57:05.752659   86299 main.go:141] libmachine: (newest-cni-471541) DBG |     </dhcp>
	I0814 17:57:05.752665   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   </ip>
	I0814 17:57:05.752669   86299 main.go:141] libmachine: (newest-cni-471541) DBG |   
	I0814 17:57:05.752674   86299 main.go:141] libmachine: (newest-cni-471541) DBG | </network>
	I0814 17:57:05.752678   86299 main.go:141] libmachine: (newest-cni-471541) DBG | 
	I0814 17:57:05.757647   86299 main.go:141] libmachine: (newest-cni-471541) DBG | trying to create private KVM network mk-newest-cni-471541 192.168.72.0/24...
	I0814 17:57:05.826472   86299 main.go:141] libmachine: (newest-cni-471541) DBG | private KVM network mk-newest-cni-471541 192.168.72.0/24 created
	I0814 17:57:05.826543   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:05.826439   86322 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:57:05.826566   86299 main.go:141] libmachine: (newest-cni-471541) Setting up store path in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541 ...
	I0814 17:57:05.826592   86299 main.go:141] libmachine: (newest-cni-471541) Building disk image from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 17:57:05.826634   86299 main.go:141] libmachine: (newest-cni-471541) Downloading /home/jenkins/minikube-integration/19446-13977/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso...
	I0814 17:57:06.074297   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:06.074149   86322 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa...
	I0814 17:57:06.297180   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:06.297037   86322 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/newest-cni-471541.rawdisk...
	I0814 17:57:06.297218   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Writing magic tar header
	I0814 17:57:06.297238   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Writing SSH key tar header
	I0814 17:57:06.297259   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:06.297171   86322 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541 ...
	I0814 17:57:06.297336   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541
	I0814 17:57:06.297374   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube/machines
	I0814 17:57:06.297390   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:57:06.297409   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19446-13977
	I0814 17:57:06.297418   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0814 17:57:06.297430   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541 (perms=drwx------)
	I0814 17:57:06.297443   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube/machines (perms=drwxr-xr-x)
	I0814 17:57:06.297457   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977/.minikube (perms=drwxr-xr-x)
	I0814 17:57:06.297469   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration/19446-13977 (perms=drwxrwxr-x)
	I0814 17:57:06.297483   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0814 17:57:06.297494   86299 main.go:141] libmachine: (newest-cni-471541) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0814 17:57:06.297506   86299 main.go:141] libmachine: (newest-cni-471541) Creating domain...
	I0814 17:57:06.297516   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home/jenkins
	I0814 17:57:06.297527   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Checking permissions on dir: /home
	I0814 17:57:06.297535   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Skipping /home - not owner
	I0814 17:57:06.298862   86299 main.go:141] libmachine: (newest-cni-471541) define libvirt domain using xml: 
	I0814 17:57:06.298875   86299 main.go:141] libmachine: (newest-cni-471541) <domain type='kvm'>
	I0814 17:57:06.298883   86299 main.go:141] libmachine: (newest-cni-471541)   <name>newest-cni-471541</name>
	I0814 17:57:06.298889   86299 main.go:141] libmachine: (newest-cni-471541)   <memory unit='MiB'>2200</memory>
	I0814 17:57:06.298894   86299 main.go:141] libmachine: (newest-cni-471541)   <vcpu>2</vcpu>
	I0814 17:57:06.298898   86299 main.go:141] libmachine: (newest-cni-471541)   <features>
	I0814 17:57:06.298904   86299 main.go:141] libmachine: (newest-cni-471541)     <acpi/>
	I0814 17:57:06.298911   86299 main.go:141] libmachine: (newest-cni-471541)     <apic/>
	I0814 17:57:06.298916   86299 main.go:141] libmachine: (newest-cni-471541)     <pae/>
	I0814 17:57:06.298923   86299 main.go:141] libmachine: (newest-cni-471541)     
	I0814 17:57:06.298928   86299 main.go:141] libmachine: (newest-cni-471541)   </features>
	I0814 17:57:06.298932   86299 main.go:141] libmachine: (newest-cni-471541)   <cpu mode='host-passthrough'>
	I0814 17:57:06.298941   86299 main.go:141] libmachine: (newest-cni-471541)   
	I0814 17:57:06.298957   86299 main.go:141] libmachine: (newest-cni-471541)   </cpu>
	I0814 17:57:06.298968   86299 main.go:141] libmachine: (newest-cni-471541)   <os>
	I0814 17:57:06.298981   86299 main.go:141] libmachine: (newest-cni-471541)     <type>hvm</type>
	I0814 17:57:06.298989   86299 main.go:141] libmachine: (newest-cni-471541)     <boot dev='cdrom'/>
	I0814 17:57:06.298999   86299 main.go:141] libmachine: (newest-cni-471541)     <boot dev='hd'/>
	I0814 17:57:06.299007   86299 main.go:141] libmachine: (newest-cni-471541)     <bootmenu enable='no'/>
	I0814 17:57:06.299017   86299 main.go:141] libmachine: (newest-cni-471541)   </os>
	I0814 17:57:06.299104   86299 main.go:141] libmachine: (newest-cni-471541)   <devices>
	I0814 17:57:06.299133   86299 main.go:141] libmachine: (newest-cni-471541)     <disk type='file' device='cdrom'>
	I0814 17:57:06.299147   86299 main.go:141] libmachine: (newest-cni-471541)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/boot2docker.iso'/>
	I0814 17:57:06.299162   86299 main.go:141] libmachine: (newest-cni-471541)       <target dev='hdc' bus='scsi'/>
	I0814 17:57:06.299171   86299 main.go:141] libmachine: (newest-cni-471541)       <readonly/>
	I0814 17:57:06.299176   86299 main.go:141] libmachine: (newest-cni-471541)     </disk>
	I0814 17:57:06.299181   86299 main.go:141] libmachine: (newest-cni-471541)     <disk type='file' device='disk'>
	I0814 17:57:06.299189   86299 main.go:141] libmachine: (newest-cni-471541)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0814 17:57:06.299203   86299 main.go:141] libmachine: (newest-cni-471541)       <source file='/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/newest-cni-471541.rawdisk'/>
	I0814 17:57:06.299211   86299 main.go:141] libmachine: (newest-cni-471541)       <target dev='hda' bus='virtio'/>
	I0814 17:57:06.299216   86299 main.go:141] libmachine: (newest-cni-471541)     </disk>
	I0814 17:57:06.299223   86299 main.go:141] libmachine: (newest-cni-471541)     <interface type='network'>
	I0814 17:57:06.299229   86299 main.go:141] libmachine: (newest-cni-471541)       <source network='mk-newest-cni-471541'/>
	I0814 17:57:06.299241   86299 main.go:141] libmachine: (newest-cni-471541)       <model type='virtio'/>
	I0814 17:57:06.299249   86299 main.go:141] libmachine: (newest-cni-471541)     </interface>
	I0814 17:57:06.299253   86299 main.go:141] libmachine: (newest-cni-471541)     <interface type='network'>
	I0814 17:57:06.299261   86299 main.go:141] libmachine: (newest-cni-471541)       <source network='default'/>
	I0814 17:57:06.299267   86299 main.go:141] libmachine: (newest-cni-471541)       <model type='virtio'/>
	I0814 17:57:06.299274   86299 main.go:141] libmachine: (newest-cni-471541)     </interface>
	I0814 17:57:06.299279   86299 main.go:141] libmachine: (newest-cni-471541)     <serial type='pty'>
	I0814 17:57:06.299285   86299 main.go:141] libmachine: (newest-cni-471541)       <target port='0'/>
	I0814 17:57:06.299290   86299 main.go:141] libmachine: (newest-cni-471541)     </serial>
	I0814 17:57:06.299297   86299 main.go:141] libmachine: (newest-cni-471541)     <console type='pty'>
	I0814 17:57:06.299303   86299 main.go:141] libmachine: (newest-cni-471541)       <target type='serial' port='0'/>
	I0814 17:57:06.299314   86299 main.go:141] libmachine: (newest-cni-471541)     </console>
	I0814 17:57:06.299319   86299 main.go:141] libmachine: (newest-cni-471541)     <rng model='virtio'>
	I0814 17:57:06.299344   86299 main.go:141] libmachine: (newest-cni-471541)       <backend model='random'>/dev/random</backend>
	I0814 17:57:06.299358   86299 main.go:141] libmachine: (newest-cni-471541)     </rng>
	I0814 17:57:06.299376   86299 main.go:141] libmachine: (newest-cni-471541)     
	I0814 17:57:06.299389   86299 main.go:141] libmachine: (newest-cni-471541)     
	I0814 17:57:06.299399   86299 main.go:141] libmachine: (newest-cni-471541)   </devices>
	I0814 17:57:06.299410   86299 main.go:141] libmachine: (newest-cni-471541) </domain>
	I0814 17:57:06.299420   86299 main.go:141] libmachine: (newest-cni-471541) 
	I0814 17:57:06.303763   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:84:ea:86 in network default
	I0814 17:57:06.304293   86299 main.go:141] libmachine: (newest-cni-471541) Ensuring networks are active...
	I0814 17:57:06.304318   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:06.305003   86299 main.go:141] libmachine: (newest-cni-471541) Ensuring network default is active
	I0814 17:57:06.305448   86299 main.go:141] libmachine: (newest-cni-471541) Ensuring network mk-newest-cni-471541 is active
	I0814 17:57:06.306017   86299 main.go:141] libmachine: (newest-cni-471541) Getting domain xml...
	I0814 17:57:06.306811   86299 main.go:141] libmachine: (newest-cni-471541) Creating domain...
	I0814 17:57:07.577610   86299 main.go:141] libmachine: (newest-cni-471541) Waiting to get IP...
	I0814 17:57:07.578406   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:07.578822   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:07.578853   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:07.578804   86322 retry.go:31] will retry after 192.490018ms: waiting for machine to come up
	I0814 17:57:07.773297   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:07.773800   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:07.773827   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:07.773757   86322 retry.go:31] will retry after 331.531479ms: waiting for machine to come up
	I0814 17:57:08.107381   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:08.107813   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:08.107832   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:08.107782   86322 retry.go:31] will retry after 443.490585ms: waiting for machine to come up
	I0814 17:57:08.552505   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:08.553075   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:08.553108   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:08.553010   86322 retry.go:31] will retry after 597.669641ms: waiting for machine to come up
	I0814 17:57:09.152293   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:09.152748   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:09.152779   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:09.152702   86322 retry.go:31] will retry after 728.666666ms: waiting for machine to come up
	I0814 17:57:09.882516   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:09.882939   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:09.882969   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:09.882884   86322 retry.go:31] will retry after 681.482968ms: waiting for machine to come up
	I0814 17:57:10.565460   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:10.565874   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:10.565905   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:10.565828   86322 retry.go:31] will retry after 1.190044961s: waiting for machine to come up
	I0814 17:57:11.758291   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:11.758824   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:11.758851   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:11.758775   86322 retry.go:31] will retry after 1.16384016s: waiting for machine to come up
	I0814 17:57:12.924081   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:12.924517   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:12.924539   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:12.924482   86322 retry.go:31] will retry after 1.365508056s: waiting for machine to come up
	I0814 17:57:14.292166   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:14.292626   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:14.292645   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:14.292603   86322 retry.go:31] will retry after 1.879924239s: waiting for machine to come up
	I0814 17:57:16.174619   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:16.175097   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:16.175128   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:16.175054   86322 retry.go:31] will retry after 2.741925753s: waiting for machine to come up
	I0814 17:57:18.919315   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:18.919832   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:18.919856   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:18.919796   86322 retry.go:31] will retry after 2.97592505s: waiting for machine to come up
	I0814 17:57:21.897443   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:21.897938   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find current IP address of domain newest-cni-471541 in network mk-newest-cni-471541
	I0814 17:57:21.897961   86299 main.go:141] libmachine: (newest-cni-471541) DBG | I0814 17:57:21.897889   86322 retry.go:31] will retry after 3.312414184s: waiting for machine to come up
	I0814 17:57:25.213217   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.213827   86299 main.go:141] libmachine: (newest-cni-471541) Found IP for machine: 192.168.72.111
	I0814 17:57:25.213848   86299 main.go:141] libmachine: (newest-cni-471541) Reserving static IP address...
	I0814 17:57:25.213860   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has current primary IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.214217   86299 main.go:141] libmachine: (newest-cni-471541) DBG | unable to find host DHCP lease matching {name: "newest-cni-471541", mac: "52:54:00:ae:15:ce", ip: "192.168.72.111"} in network mk-newest-cni-471541
	I0814 17:57:25.290900   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Getting to WaitForSSH function...
	I0814 17:57:25.290920   86299 main.go:141] libmachine: (newest-cni-471541) Reserved static IP address: 192.168.72.111
	I0814 17:57:25.290930   86299 main.go:141] libmachine: (newest-cni-471541) Waiting for SSH to be available...
	I0814 17:57:25.293509   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.293998   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.294027   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.294199   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Using SSH client type: external
	I0814 17:57:25.294224   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa (-rw-------)
	I0814 17:57:25.294251   86299 main.go:141] libmachine: (newest-cni-471541) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:57:25.294263   86299 main.go:141] libmachine: (newest-cni-471541) DBG | About to run SSH command:
	I0814 17:57:25.294273   86299 main.go:141] libmachine: (newest-cni-471541) DBG | exit 0
	I0814 17:57:25.419450   86299 main.go:141] libmachine: (newest-cni-471541) DBG | SSH cmd err, output: <nil>: 
	I0814 17:57:25.419760   86299 main.go:141] libmachine: (newest-cni-471541) KVM machine creation complete!
	I0814 17:57:25.420099   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetConfigRaw
	I0814 17:57:25.420562   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:25.420751   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:25.420946   86299 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0814 17:57:25.420960   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:57:25.422359   86299 main.go:141] libmachine: Detecting operating system of created instance...
	I0814 17:57:25.422372   86299 main.go:141] libmachine: Waiting for SSH to be available...
	I0814 17:57:25.422378   86299 main.go:141] libmachine: Getting to WaitForSSH function...
	I0814 17:57:25.422384   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.424518   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.424903   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.424928   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.425077   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.425287   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.425460   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.425590   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:25.425811   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:25.426030   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:25.426041   86299 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0814 17:57:25.526484   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:57:25.526510   86299 main.go:141] libmachine: Detecting the provisioner...
	I0814 17:57:25.526537   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.529297   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.529690   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.529712   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.529967   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.530134   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.530265   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.530383   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:25.530591   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:25.530815   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:25.530838   86299 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0814 17:57:25.631733   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0814 17:57:25.631838   86299 main.go:141] libmachine: found compatible host: buildroot
	I0814 17:57:25.631853   86299 main.go:141] libmachine: Provisioning with buildroot...
	I0814 17:57:25.631862   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:57:25.632114   86299 buildroot.go:166] provisioning hostname "newest-cni-471541"
	I0814 17:57:25.632140   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:57:25.632316   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.635248   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.635704   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.635747   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.635893   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.636105   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.636292   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.636429   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:25.636624   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:25.636819   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:25.636833   86299 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-471541 && echo "newest-cni-471541" | sudo tee /etc/hostname
	I0814 17:57:25.753175   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-471541
	
	I0814 17:57:25.753201   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.755722   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.756081   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.756110   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.756322   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.756495   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.756649   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:25.756752   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:25.756885   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:25.757089   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:25.757124   86299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-471541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-471541/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-471541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:57:25.867757   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:57:25.867793   86299 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:57:25.867853   86299 buildroot.go:174] setting up certificates
	I0814 17:57:25.867872   86299 provision.go:84] configureAuth start
	I0814 17:57:25.867890   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetMachineName
	I0814 17:57:25.868202   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:57:25.870840   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.871196   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.871223   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.871364   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.873405   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.873732   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.873759   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.873928   86299 provision.go:143] copyHostCerts
	I0814 17:57:25.873996   86299 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:57:25.874010   86299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:57:25.874092   86299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:57:25.874181   86299 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:57:25.874189   86299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:57:25.874215   86299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:57:25.874281   86299 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:57:25.874290   86299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:57:25.874312   86299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:57:25.874379   86299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.newest-cni-471541 san=[127.0.0.1 192.168.72.111 localhost minikube newest-cni-471541]
	I0814 17:57:25.996425   86299 provision.go:177] copyRemoteCerts
	I0814 17:57:25.996483   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:57:25.996506   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:25.999060   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.999458   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:25.999485   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:25.999651   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:25.999848   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.000089   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.000226   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:26.081077   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:57:26.106955   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:57:26.131893   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:57:26.156123   86299 provision.go:87] duration metric: took 288.234058ms to configureAuth
	I0814 17:57:26.156159   86299 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:57:26.156391   86299 config.go:182] Loaded profile config "newest-cni-471541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:26.156472   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.159434   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.159811   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.159861   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.160010   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.160224   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.160386   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.160557   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.160764   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:26.161002   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:26.161029   86299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:57:26.420224   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:57:26.420256   86299 main.go:141] libmachine: Checking connection to Docker...
	I0814 17:57:26.420267   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetURL
	I0814 17:57:26.421520   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Using libvirt version 6000000
	I0814 17:57:26.424041   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.424331   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.424366   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.424536   86299 main.go:141] libmachine: Docker is up and running!
	I0814 17:57:26.424548   86299 main.go:141] libmachine: Reticulating splines...
	I0814 17:57:26.424554   86299 client.go:171] duration metric: took 20.677897664s to LocalClient.Create
	I0814 17:57:26.424576   86299 start.go:167] duration metric: took 20.677957595s to libmachine.API.Create "newest-cni-471541"
	I0814 17:57:26.424587   86299 start.go:293] postStartSetup for "newest-cni-471541" (driver="kvm2")
	I0814 17:57:26.424596   86299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:57:26.424608   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.424862   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:57:26.424891   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.427017   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.427490   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.427515   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.427708   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.427885   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.428041   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.428171   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:26.509446   86299 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:57:26.513557   86299 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:57:26.513583   86299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:57:26.513651   86299 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:57:26.513748   86299 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:57:26.513844   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:57:26.526792   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:57:26.550156   86299 start.go:296] duration metric: took 125.558681ms for postStartSetup
	I0814 17:57:26.550202   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetConfigRaw
	I0814 17:57:26.550835   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:57:26.553916   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.554312   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.554345   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.554604   86299 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/config.json ...
	I0814 17:57:26.554798   86299 start.go:128] duration metric: took 20.826306791s to createHost
	I0814 17:57:26.554824   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.557132   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.557546   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.557588   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.557767   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.557942   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.558124   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.558268   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.558473   86299 main.go:141] libmachine: Using SSH client type: native
	I0814 17:57:26.558646   86299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0814 17:57:26.558665   86299 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:57:26.659772   86299 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723658246.633049223
	
	I0814 17:57:26.659793   86299 fix.go:216] guest clock: 1723658246.633049223
	I0814 17:57:26.659801   86299 fix.go:229] Guest: 2024-08-14 17:57:26.633049223 +0000 UTC Remote: 2024-08-14 17:57:26.554810264 +0000 UTC m=+20.939172484 (delta=78.238959ms)
	I0814 17:57:26.659830   86299 fix.go:200] guest clock delta is within tolerance: 78.238959ms
	I0814 17:57:26.659835   86299 start.go:83] releasing machines lock for "newest-cni-471541", held for 20.931408514s
	I0814 17:57:26.659854   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.660128   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:57:26.662819   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.663199   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.663220   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.663480   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.664005   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.664235   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:26.664382   86299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:57:26.664432   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.664453   86299 ssh_runner.go:195] Run: cat /version.json
	I0814 17:57:26.664475   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:26.667238   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.667497   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.667573   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.667602   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.667748   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.667942   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.667987   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:26.668016   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:26.668093   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.668196   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:26.668266   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:26.668510   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:26.668722   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:26.668892   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:26.743956   86299 ssh_runner.go:195] Run: systemctl --version
	I0814 17:57:26.782159   86299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:57:26.944504   86299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:57:26.950905   86299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:57:26.950960   86299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:57:26.966247   86299 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:57:26.966272   86299 start.go:495] detecting cgroup driver to use...
	I0814 17:57:26.966337   86299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:57:26.980854   86299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:57:26.993918   86299 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:57:26.993977   86299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:57:27.007239   86299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:57:27.020726   86299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:57:27.145122   86299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:57:27.287567   86299 docker.go:233] disabling docker service ...
	I0814 17:57:27.287640   86299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:57:27.305385   86299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:57:27.322269   86299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:57:27.464605   86299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:57:27.586777   86299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:57:27.600954   86299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:57:27.618663   86299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:57:27.618722   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.628397   86299 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:57:27.628486   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.638355   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.649135   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.659398   86299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:57:27.669485   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.679429   86299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.695959   86299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:57:27.705972   86299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:57:27.714686   86299 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:57:27.714750   86299 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:57:27.726487   86299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:57:27.735247   86299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:57:27.856059   86299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:57:27.992082   86299 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:57:27.992163   86299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:57:27.997308   86299 start.go:563] Will wait 60s for crictl version
	I0814 17:57:27.997357   86299 ssh_runner.go:195] Run: which crictl
	I0814 17:57:28.000861   86299 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:57:28.038826   86299 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:57:28.038906   86299 ssh_runner.go:195] Run: crio --version
	I0814 17:57:28.067028   86299 ssh_runner.go:195] Run: crio --version
	I0814 17:57:28.095352   86299 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:57:28.096650   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetIP
	I0814 17:57:28.099436   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:28.099778   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:28.099799   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:28.099986   86299 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:57:28.104054   86299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:57:28.117409   86299 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0814 17:57:28.118596   86299 kubeadm.go:883] updating cluster {Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:57:28.118731   86299 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:57:28.118804   86299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:57:28.150199   86299 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:57:28.150275   86299 ssh_runner.go:195] Run: which lz4
	I0814 17:57:28.154018   86299 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:57:28.157798   86299 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:57:28.157831   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:57:29.420372   86299 crio.go:462] duration metric: took 1.26637973s to copy over tarball
	I0814 17:57:29.420455   86299 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:57:31.480080   86299 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059594465s)
	I0814 17:57:31.480116   86299 crio.go:469] duration metric: took 2.059711522s to extract the tarball
	I0814 17:57:31.480161   86299 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:57:31.518708   86299 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:57:31.564587   86299 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:57:31.564608   86299 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:57:31.564615   86299 kubeadm.go:934] updating node { 192.168.72.111 8443 v1.31.0 crio true true} ...
	I0814 17:57:31.564708   86299 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-471541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:57:31.564770   86299 ssh_runner.go:195] Run: crio config
	I0814 17:57:31.611368   86299 cni.go:84] Creating CNI manager for ""
	I0814 17:57:31.611386   86299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:57:31.611397   86299 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0814 17:57:31.611417   86299 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-471541 NodeName:newest-cni-471541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:57:31.611566   86299 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-471541"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:57:31.611626   86299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:57:31.620975   86299 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:57:31.621029   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:57:31.630694   86299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0814 17:57:31.647731   86299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:57:31.663961   86299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0814 17:57:31.679061   86299 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0814 17:57:31.682514   86299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:57:31.693658   86299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:57:31.815232   86299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:57:31.832616   86299 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541 for IP: 192.168.72.111
	I0814 17:57:31.832641   86299 certs.go:194] generating shared ca certs ...
	I0814 17:57:31.832657   86299 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:31.832804   86299 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:57:31.832846   86299 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:57:31.832856   86299 certs.go:256] generating profile certs ...
	I0814 17:57:31.832925   86299 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.key
	I0814 17:57:31.832939   86299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.crt with IP's: []
	I0814 17:57:32.014258   86299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.crt ...
	I0814 17:57:32.014289   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.crt: {Name:mk52b84d834b78123e55ca64dba1a8b4d8b898aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.014459   86299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.key ...
	I0814 17:57:32.014469   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/client.key: {Name:mk9567e6fc3d29715ca9a09dafb97350c0bceb29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.014549   86299 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b
	I0814 17:57:32.014563   86299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt.5e517d6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.111]
	I0814 17:57:32.164276   86299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt.5e517d6b ...
	I0814 17:57:32.164308   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt.5e517d6b: {Name:mkac8adafeddf6c4f1d680cb94be9d6c22597534 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.164472   86299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b ...
	I0814 17:57:32.164485   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b: {Name:mke9139d583f3caeb8974b7b3c201343ee74e43e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.164554   86299 certs.go:381] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt.5e517d6b -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt
	I0814 17:57:32.164664   86299 certs.go:385] copying /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key.5e517d6b -> /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key
	I0814 17:57:32.164719   86299 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key
	I0814 17:57:32.164734   86299 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt with IP's: []
	I0814 17:57:32.231890   86299 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt ...
	I0814 17:57:32.231921   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt: {Name:mk2b2f1abb23d3529705151f176cdde77bf7fdac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:32.232077   86299 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key ...
	I0814 17:57:32.232092   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key: {Name:mk2d72284b2ac40d1f8ebb8a9d06c28bb6e57547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
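The profile certificates written above (client, apiserver, proxy-client) can be inspected with openssl to confirm that the SANs baked into the apiserver cert match the service and node IPs listed at 17:57:32.014563. A hedged sketch, using the profile path from the log; the openssl invocations are standard tooling and were not part of this run:

    PROFILE=/home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541
    # Print the Subject Alternative Names embedded in the apiserver certificate.
    openssl x509 -in "$PROFILE/apiserver.crt" -noout -text | grep -A1 'Subject Alternative Name'
    # Check the validity window and issuer.
    openssl x509 -in "$PROFILE/apiserver.crt" -noout -dates -issuer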
	I0814 17:57:32.232263   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:57:32.232298   86299 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:57:32.232308   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:57:32.232329   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:57:32.232350   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:57:32.232374   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:57:32.232410   86299 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:57:32.232983   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:57:32.256499   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:57:32.277563   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:57:32.298928   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:57:32.321037   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:57:32.342399   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:57:32.363856   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:57:32.385891   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/newest-cni-471541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:57:32.408600   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:57:32.430317   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:57:32.453400   86299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:57:32.476713   86299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:57:32.492709   86299 ssh_runner.go:195] Run: openssl version
	I0814 17:57:32.498690   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:57:32.509183   86299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:57:32.513273   86299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:57:32.513316   86299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:57:32.519038   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:57:32.529124   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:57:32.539264   86299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:57:32.543407   86299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:57:32.543464   86299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:57:32.549026   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:57:32.559392   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:57:32.570264   86299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:57:32.575210   86299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:57:32.575267   86299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:57:32.581151   86299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
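The block above wires the copied PEMs into the host's trust store: openssl x509 -hash prints the subject-name hash, and the certificate is then exposed as /etc/ssl/certs/<hash>.0, the lookup name OpenSSL expects. The same two steps for a single certificate, sketched with a placeholder file (the b5213941 hash for minikubeCA.pem is the one visible in the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # placeholder; any PEM in the trust directory
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941, as seen above
    # OpenSSL resolves CA certificates via <subject-hash>.0 symlinks in /etc/ssl/certs.
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"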
	I0814 17:57:32.594134   86299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:57:32.600886   86299 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0814 17:57:32.600933   86299 kubeadm.go:392] StartCluster: {Name:newest-cni-471541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:newest-cni-471541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:57:32.601000   86299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:57:32.601060   86299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:57:32.657188   86299 cri.go:89] found id: ""
	I0814 17:57:32.657255   86299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:57:32.667184   86299 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:57:32.676720   86299 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:57:32.685231   86299 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:57:32.685249   86299 kubeadm.go:157] found existing configuration files:
	
	I0814 17:57:32.685290   86299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:57:32.694192   86299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:57:32.694272   86299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:57:32.703728   86299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:57:32.712273   86299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:57:32.712325   86299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:57:32.721370   86299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:57:32.730304   86299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:57:32.730397   86299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:57:32.739898   86299 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:57:32.748154   86299 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:57:32.748226   86299 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:57:32.757221   86299 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:57:32.855261   86299 kubeadm.go:310] W0814 17:57:32.836323     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:57:32.856181   86299 kubeadm.go:310] W0814 17:57:32.837336     844 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:57:32.956543   86299 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
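The [WARNING Service-Kubelet] above is kubeadm noting that the kubelet unit was started (see the systemctl start at 17:57:31.815232) but is not enabled for boot. On anything longer-lived than a test VM you would normally clear it exactly as the message suggests; a minimal sketch:

    # Enable the kubelet unit so it comes back after a reboot (and start it if not already running).
    sudo systemctl enable --now kubelet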
	I0814 17:57:43.407080   86299 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:57:43.407168   86299 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:57:43.407266   86299 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:57:43.407453   86299 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:57:43.407593   86299 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:57:43.407693   86299 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:57:43.410365   86299 out.go:204]   - Generating certificates and keys ...
	I0814 17:57:43.410456   86299 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:57:43.410542   86299 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:57:43.410641   86299 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0814 17:57:43.410698   86299 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0814 17:57:43.410786   86299 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0814 17:57:43.410870   86299 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0814 17:57:43.410950   86299 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0814 17:57:43.411115   86299 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-471541] and IPs [192.168.72.111 127.0.0.1 ::1]
	I0814 17:57:43.411189   86299 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0814 17:57:43.411315   86299 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-471541] and IPs [192.168.72.111 127.0.0.1 ::1]
	I0814 17:57:43.411447   86299 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0814 17:57:43.411563   86299 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0814 17:57:43.411617   86299 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0814 17:57:43.411687   86299 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:57:43.411764   86299 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:57:43.411863   86299 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:57:43.411937   86299 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:57:43.411998   86299 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:57:43.412049   86299 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:57:43.412134   86299 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:57:43.412231   86299 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:57:43.413378   86299 out.go:204]   - Booting up control plane ...
	I0814 17:57:43.413484   86299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:57:43.413602   86299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:57:43.413683   86299 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:57:43.413815   86299 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:57:43.414003   86299 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:57:43.414039   86299 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:57:43.414214   86299 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:57:43.414324   86299 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:57:43.414396   86299 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.4461ms
	I0814 17:57:43.414498   86299 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:57:43.414591   86299 kubeadm.go:310] [api-check] The API server is healthy after 6.001525382s
	I0814 17:57:43.414704   86299 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:57:43.414823   86299 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:57:43.414912   86299 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:57:43.415181   86299 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-471541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:57:43.415282   86299 kubeadm.go:310] [bootstrap-token] Using token: mnlq2m.zz0pj7oikraspg1j
	I0814 17:57:43.416874   86299 out.go:204]   - Configuring RBAC rules ...
	I0814 17:57:43.416992   86299 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:57:43.417088   86299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:57:43.417229   86299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:57:43.417385   86299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:57:43.417552   86299 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:57:43.417680   86299 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:57:43.417780   86299 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:57:43.417818   86299 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:57:43.417857   86299 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:57:43.417862   86299 kubeadm.go:310] 
	I0814 17:57:43.417910   86299 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:57:43.417915   86299 kubeadm.go:310] 
	I0814 17:57:43.418009   86299 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:57:43.418027   86299 kubeadm.go:310] 
	I0814 17:57:43.418067   86299 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:57:43.418154   86299 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:57:43.418223   86299 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:57:43.418232   86299 kubeadm.go:310] 
	I0814 17:57:43.418313   86299 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:57:43.418326   86299 kubeadm.go:310] 
	I0814 17:57:43.418398   86299 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:57:43.418406   86299 kubeadm.go:310] 
	I0814 17:57:43.418477   86299 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:57:43.418576   86299 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:57:43.418674   86299 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:57:43.418682   86299 kubeadm.go:310] 
	I0814 17:57:43.418780   86299 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:57:43.418878   86299 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:57:43.418888   86299 kubeadm.go:310] 
	I0814 17:57:43.418972   86299 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mnlq2m.zz0pj7oikraspg1j \
	I0814 17:57:43.419057   86299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:57:43.419081   86299 kubeadm.go:310] 	--control-plane 
	I0814 17:57:43.419087   86299 kubeadm.go:310] 
	I0814 17:57:43.419175   86299 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:57:43.419184   86299 kubeadm.go:310] 
	I0814 17:57:43.419281   86299 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mnlq2m.zz0pj7oikraspg1j \
	I0814 17:57:43.419416   86299 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
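The join commands printed above embed a bootstrap token with a limited lifetime. If it expires before another node joins, a fresh token and join command can be minted on the control plane; this is standard kubeadm usage, not something executed in this run:

    # List existing bootstrap tokens and their expiry.
    sudo kubeadm token list
    # Create a new token and print the full join command, including the CA cert hash.
    sudo kubeadm token create --print-join-command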
	I0814 17:57:43.419432   86299 cni.go:84] Creating CNI manager for ""
	I0814 17:57:43.419438   86299 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:57:43.421053   86299 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:57:43.422382   86299 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:57:43.434351   86299 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
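The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge configuration the "Configuring bridge CNI" step refers to. Its exact contents are not shown in this log; purely as an illustration, a typical bridge + host-local conflist looks roughly like the sketch below, written as a shell heredoc. The subnet echoes the pod-network-cidr 10.42.0.0/16 from the StartCluster options; everything else here is an assumption, not the file minikube generated:

    # Hypothetical example of a bridge CNI conflist; not the file from this run.
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF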
	I0814 17:57:43.455308   86299 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:57:43.455408   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:43.455463   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-471541 minikube.k8s.io/updated_at=2024_08_14T17_57_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=newest-cni-471541 minikube.k8s.io/primary=true
	I0814 17:57:43.488133   86299 ops.go:34] apiserver oom_adj: -16
	I0814 17:57:43.683870   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:44.183959   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:44.684056   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:45.184879   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:45.684953   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:46.184588   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:46.684165   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:47.183993   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:47.684695   86299 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:57:47.767751   86299 kubeadm.go:1113] duration metric: took 4.312422138s to wait for elevateKubeSystemPrivileges
	I0814 17:57:47.767777   86299 kubeadm.go:394] duration metric: took 15.166847499s to StartCluster
	I0814 17:57:47.767796   86299 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:47.767878   86299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:57:47.770091   86299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:57:47.770333   86299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0814 17:57:47.770361   86299 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:57:47.770456   86299 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:57:47.770535   86299 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-471541"
	I0814 17:57:47.770540   86299 config.go:182] Loaded profile config "newest-cni-471541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:57:47.770563   86299 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-471541"
	I0814 17:57:47.770552   86299 addons.go:69] Setting default-storageclass=true in profile "newest-cni-471541"
	I0814 17:57:47.770611   86299 host.go:66] Checking if "newest-cni-471541" exists ...
	I0814 17:57:47.770712   86299 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-471541"
	I0814 17:57:47.771044   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.771080   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.771165   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.771206   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.772982   86299 out.go:177] * Verifying Kubernetes components...
	I0814 17:57:47.774337   86299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:57:47.786878   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46607
	I0814 17:57:47.787267   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I0814 17:57:47.787406   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.787727   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.787963   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.787982   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.788213   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.788232   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.788310   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.788572   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.788734   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:57:47.788896   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.788940   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.792639   86299 addons.go:234] Setting addon default-storageclass=true in "newest-cni-471541"
	I0814 17:57:47.792673   86299 host.go:66] Checking if "newest-cni-471541" exists ...
	I0814 17:57:47.792979   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.793021   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.805578   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I0814 17:57:47.806106   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.806668   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.806696   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.807101   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.807301   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:57:47.809164   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:47.809613   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43303
	I0814 17:57:47.810207   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.810697   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.810723   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.811033   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.811220   86299 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:57:47.811589   86299 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:57:47.811622   86299 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:57:47.812922   86299 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:57:47.812940   86299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:57:47.812959   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:47.816598   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:47.817246   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:47.817284   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:47.817528   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:47.817723   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:47.817983   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:47.818133   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:47.828348   86299 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45611
	I0814 17:57:47.828776   86299 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:57:47.829259   86299 main.go:141] libmachine: Using API Version  1
	I0814 17:57:47.829276   86299 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:57:47.829624   86299 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:57:47.829817   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetState
	I0814 17:57:47.831531   86299 main.go:141] libmachine: (newest-cni-471541) Calling .DriverName
	I0814 17:57:47.831875   86299 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:57:47.831895   86299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:57:47.831914   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHHostname
	I0814 17:57:47.834448   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:47.834823   86299 main.go:141] libmachine: (newest-cni-471541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:15:ce", ip: ""} in network mk-newest-cni-471541: {Iface:virbr4 ExpiryTime:2024-08-14 18:57:19 +0000 UTC Type:0 Mac:52:54:00:ae:15:ce Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:newest-cni-471541 Clientid:01:52:54:00:ae:15:ce}
	I0814 17:57:47.834862   86299 main.go:141] libmachine: (newest-cni-471541) DBG | domain newest-cni-471541 has defined IP address 192.168.72.111 and MAC address 52:54:00:ae:15:ce in network mk-newest-cni-471541
	I0814 17:57:47.834971   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHPort
	I0814 17:57:47.835159   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHKeyPath
	I0814 17:57:47.835285   86299 main.go:141] libmachine: (newest-cni-471541) Calling .GetSSHUsername
	I0814 17:57:47.835430   86299 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/newest-cni-471541/id_rsa Username:docker}
	I0814 17:57:48.041626   86299 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0814 17:57:48.070674   86299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:57:48.258913   86299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:57:48.335827   86299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:57:48.631875   86299 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
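The pipeline at 17:57:48.041626 rewrites the coredns ConfigMap so the Corefile gains a hosts block mapping host.minikube.internal to the gateway IP, which is what the "host record injected" line above confirms. A quick way to verify the result on a working cluster (standard kubectl, not run in this log):

    # Print the live Corefile and look for the injected hosts stanza.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'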
	I0814 17:57:48.633914   86299 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:57:48.633975   86299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:57:48.838007   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:48.838035   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:48.838433   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:48.838462   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:48.838470   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:48.838479   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:48.838436   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:57:48.838714   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:57:48.838754   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:48.838763   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:48.862868   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:48.862893   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:48.863203   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:48.863226   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:48.863296   86299 main.go:141] libmachine: (newest-cni-471541) DBG | Closing plugin on server side
	I0814 17:57:49.139527   86299 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-471541" context rescaled to 1 replicas
	I0814 17:57:49.410997   86299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.075113426s)
	I0814 17:57:49.411050   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:49.411063   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:49.411091   86299 api_server.go:72] duration metric: took 1.640695889s to wait for apiserver process to appear ...
	I0814 17:57:49.411118   86299 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:57:49.411140   86299 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8443/healthz ...
	I0814 17:57:49.411379   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:49.411393   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:49.411401   86299 main.go:141] libmachine: Making call to close driver server
	I0814 17:57:49.411407   86299 main.go:141] libmachine: (newest-cni-471541) Calling .Close
	I0814 17:57:49.411627   86299 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:57:49.411642   86299 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:57:49.413818   86299 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0814 17:57:49.415351   86299 addons.go:510] duration metric: took 1.644910599s for enable addons: enabled=[default-storageclass storage-provisioner]
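With the addons step finished, the enabled set can be confirmed from the host; a hedged sketch using the profile name from this run:

    # Show addon status for the profile created in this log.
    minikube addons list -p newest-cni-471541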
	I0814 17:57:49.423444   86299 api_server.go:279] https://192.168.72.111:8443/healthz returned 200:
	ok
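The healthz probe above goes straight to https://192.168.72.111:8443/healthz and gets a 200. Once the kubeconfig is in place, the same check can be made through kubectl, which avoids handling the client certificates by hand; a minimal sketch:

    # Ask the apiserver for its aggregate health through the configured context.
    kubectl get --raw /healthz
    # Or probe an individual check, e.g. etcd.
    kubectl get --raw /healthz/etcd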
	I0814 17:57:49.429372   86299 api_server.go:141] control plane version: v1.31.0
	I0814 17:57:49.429398   86299 api_server.go:131] duration metric: took 18.273168ms to wait for apiserver health ...
	I0814 17:57:49.429407   86299 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:57:49.445146   86299 system_pods.go:59] 8 kube-system pods found
	I0814 17:57:49.445193   86299 system_pods.go:61] "coredns-6f6b679f8f-7mjxm" [2e18a55f-6371-4dae-98ae-96f35bd3e715] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:57:49.445206   86299 system_pods.go:61] "coredns-6f6b679f8f-qwgrb" [19a7dcc5-a7ef-4c1a-8d2b-f9fe4dcac290] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:57:49.445217   86299 system_pods.go:61] "etcd-newest-cni-471541" [b2a40767-5297-4676-b579-146172237eb4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:57:49.445230   86299 system_pods.go:61] "kube-apiserver-newest-cni-471541" [72c91661-d5b6-4b97-b8e4-811b7a8f6651] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:57:49.445239   86299 system_pods.go:61] "kube-controller-manager-newest-cni-471541" [148d4870-d2c0-438e-9b5c-85640f20db45] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:57:49.445250   86299 system_pods.go:61] "kube-proxy-smtcr" [63ede546-1b98-4f05-8500-8a35f2fe52ab] Running
	I0814 17:57:49.445259   86299 system_pods.go:61] "kube-scheduler-newest-cni-471541" [b3192192-0c5b-485c-acc7-b14d6b8e5baf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:57:49.445263   86299 system_pods.go:61] "storage-provisioner" [8b2208e6-577e-4f6d-90e3-2213b2bd5b7a] Pending
	I0814 17:57:49.445272   86299 system_pods.go:74] duration metric: took 15.858724ms to wait for pod list to return data ...
	I0814 17:57:49.445281   86299 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:57:49.456269   86299 default_sa.go:45] found service account: "default"
	I0814 17:57:49.456290   86299 default_sa.go:55] duration metric: took 11.003035ms for default service account to be created ...
	I0814 17:57:49.456301   86299 kubeadm.go:582] duration metric: took 1.685910971s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0814 17:57:49.456315   86299 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:57:49.461967   86299 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:57:49.461993   86299 node_conditions.go:123] node cpu capacity is 2
	I0814 17:57:49.462006   86299 node_conditions.go:105] duration metric: took 5.685538ms to run NodePressure ...
	I0814 17:57:49.462020   86299 start.go:241] waiting for startup goroutines ...
	I0814 17:57:49.462028   86299 start.go:246] waiting for cluster config update ...
	I0814 17:57:49.462041   86299 start.go:255] writing updated cluster config ...
	I0814 17:57:49.462353   86299 ssh_runner.go:195] Run: rm -f paused
	I0814 17:57:49.521476   86299 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:57:49.524006   86299 out.go:177] * Done! kubectl is now configured to use "newest-cni-471541" cluster and "default" namespace by default
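At this point the kubeconfig at /home/jenkins/minikube-integration/19446-13977/kubeconfig carries a context for the new profile (minikube names the context after the profile). A short verification sequence, not part of the recorded run:

    # Point kubectl at the profile's context and confirm the single control-plane node is Ready.
    kubectl config use-context newest-cni-471541
    kubectl get nodes -o wide
    kubectl get pods -A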
	
	
	==> CRI-O <==
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.353495305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658274353387119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c383ae01-8293-43e8-b23b-25ed3fa427ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.353996941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29bc4136-3615-4a4d-96ad-055a1372a184 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.354081607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29bc4136-3615-4a4d-96ad-055a1372a184 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.354394402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c,PodSandboxId:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657382263320602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71,PodSandboxId:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381501668944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5,PodSandboxId:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381268205264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b
0e3bf4-41d9-4151-8255-37881e596c20,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8,PodSandboxId:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723657380705297314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010,PodSandboxId:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657369806330384,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5,PodSandboxId:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657369820972
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915,PodSandboxId:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657369809360353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02,PodSandboxId:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657369736344265,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9,PodSandboxId:be5645e5ce93e1e6589d5d428d66361441b33cdea203ed9f3c8810db9262b676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657089297749478,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29bc4136-3615-4a4d-96ad-055a1372a184 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.392790654Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d89fd1f-a5ab-4e6b-8f74-06c1c7ed03e6 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.392893334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d89fd1f-a5ab-4e6b-8f74-06c1c7ed03e6 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.394156112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6491466c-ff2d-43d5-a218-5851fd684101 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.394674552Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658274394648926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6491466c-ff2d-43d5-a218-5851fd684101 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.395146119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a36430eb-1efe-48fe-a56b-a6e53265941c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.395226543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a36430eb-1efe-48fe-a56b-a6e53265941c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.396050345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c,PodSandboxId:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657382263320602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71,PodSandboxId:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381501668944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5,PodSandboxId:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381268205264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b
0e3bf4-41d9-4151-8255-37881e596c20,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8,PodSandboxId:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723657380705297314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010,PodSandboxId:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657369806330384,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5,PodSandboxId:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657369820972
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915,PodSandboxId:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657369809360353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02,PodSandboxId:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657369736344265,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9,PodSandboxId:be5645e5ce93e1e6589d5d428d66361441b33cdea203ed9f3c8810db9262b676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657089297749478,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a36430eb-1efe-48fe-a56b-a6e53265941c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.436452302Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3bde291-af25-4c26-8c94-5da4ed80bdb9 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.436533346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3bde291-af25-4c26-8c94-5da4ed80bdb9 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.437491182Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=908c39f9-4e17-43c1-af59-1e2de6945ecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.437883734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658274437859429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=908c39f9-4e17-43c1-af59-1e2de6945ecb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.438600882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c758fab-6035-427b-b579-9cc1166df1c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.438669484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c758fab-6035-427b-b579-9cc1166df1c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.438880334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c,PodSandboxId:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657382263320602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71,PodSandboxId:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381501668944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5,PodSandboxId:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381268205264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b
0e3bf4-41d9-4151-8255-37881e596c20,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8,PodSandboxId:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723657380705297314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010,PodSandboxId:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657369806330384,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5,PodSandboxId:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657369820972
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915,PodSandboxId:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657369809360353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02,PodSandboxId:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657369736344265,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9,PodSandboxId:be5645e5ce93e1e6589d5d428d66361441b33cdea203ed9f3c8810db9262b676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657089297749478,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c758fab-6035-427b-b579-9cc1166df1c6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.469703210Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ebf09d07-4261-4dde-987b-1d2f6a58aede name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.469780330Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebf09d07-4261-4dde-987b-1d2f6a58aede name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.470865903Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c3f999c-1b28-477a-a02e-5a8df9d917ef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.471183031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658274471162730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c3f999c-1b28-477a-a02e-5a8df9d917ef name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.471825938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=739b5029-27c9-479b-a4ca-d0bfc4c2cbe5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.471877840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=739b5029-27c9-479b-a4ca-d0bfc4c2cbe5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:54 no-preload-545149 crio[722]: time="2024-08-14 17:57:54.472068199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c,PodSandboxId:0d1171be4b2cdbe55c156b24a7b26d5e274d7315319fae670b86cfcf9865b035,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723657382263320602,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc80ba99-eecf-4eb1-bd78-f88792cb3e5a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71,PodSandboxId:4df6341d4c94d9068260af133f0689b5adc0108677a2dd4bbdc216e3417c242a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381501668944,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-h4dmc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33f2fdca-15ba-430f-989f-3c569f33a76a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5,PodSandboxId:9a33b11104553d78ee84468c3fd39b6c21c397b9897af6afcf1a1e415ebcc3e0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723657381268205264,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpfqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b
0e3bf4-41d9-4151-8255-37881e596c20,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8,PodSandboxId:2392f372cf1b920a66e520a8bc8efcc0eef2d04628c9149313392b98838ef050,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:
1723657380705297314,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s6bps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9165c654-568f-4206-878c-f0c88ccd38cd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010,PodSandboxId:6950ae89f5edc31e41d4d2c4c3cb1d74511ea7538e81269f451dea53148949b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723657369806330384,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00a248fb55c574b206d666259690ea8d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5,PodSandboxId:67609ef7253a49b1ed4c8648d9599f4bca6bae2d483115669a443052e4ec8296,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723657369820972
828,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d155167fb36f79ed629d90b68f623528,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915,PodSandboxId:505c3ee880b56b78659330f2def011258ae74c2008da0b590d72b28ad3865133,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723657369809360353,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6412917e9c19e52d0a896519458e8f07,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02,PodSandboxId:b3fbe63d0b395e8ff81bf95aa50d953c6cd68f3b87439eeeaa3fe3b6109fa72e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723657369736344265,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9,PodSandboxId:be5645e5ce93e1e6589d5d428d66361441b33cdea203ed9f3c8810db9262b676,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723657089297749478,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-545149,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcf0ae35132362a5a7f1f7744a41f06a,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=739b5029-27c9-479b-a4ca-d0bfc4c2cbe5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6411832275e2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   0d1171be4b2cd       storage-provisioner
	be40074a3ac3b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   4df6341d4c94d       coredns-6f6b679f8f-h4dmc
	fd85bbc0876fa       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   9a33b11104553       coredns-6f6b679f8f-mpfqf
	6f86f7bd2800b       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   14 minutes ago      Running             kube-proxy                0                   2392f372cf1b9       kube-proxy-s6bps
	ad2db9a00effe       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   15 minutes ago      Running             kube-scheduler            2                   67609ef7253a4       kube-scheduler-no-preload-545149
	3a8d7d31b1c60       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   505c3ee880b56       etcd-no-preload-545149
	a6471a23e249b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   15 minutes ago      Running             kube-controller-manager   2                   6950ae89f5edc       kube-controller-manager-no-preload-545149
	22898c56f39e5       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   15 minutes ago      Running             kube-apiserver            2                   b3fbe63d0b395       kube-apiserver-no-preload-545149
	1c1eb47f90029       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   19 minutes ago      Exited              kube-apiserver            1                   be5645e5ce93e       kube-apiserver-no-preload-545149
	
	
	==> coredns [be40074a3ac3bf30838f60f23a820c7f019349867b7cee0f905b6a5269f21d71] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fd85bbc0876fa7110310a46dc939feb47b1b471d7f091b294bdb265fe1f922b5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-545149
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-545149
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35
	                    minikube.k8s.io/name=no-preload-545149
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_14T17_42_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Aug 2024 17:42:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-545149
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Aug 2024 17:57:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Aug 2024 17:53:16 +0000   Wed, 14 Aug 2024 17:42:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Aug 2024 17:53:16 +0000   Wed, 14 Aug 2024 17:42:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Aug 2024 17:53:16 +0000   Wed, 14 Aug 2024 17:42:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Aug 2024 17:53:16 +0000   Wed, 14 Aug 2024 17:42:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    no-preload-545149
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7de90293c47344b9b9852d77ef42a8b0
	  System UUID:                7de90293-c473-44b9-b985-2d77ef42a8b0
	  Boot ID:                    2862b156-9a6e-4776-85d9-1339de7d8568
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-h4dmc                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-mpfqf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-545149                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-545149             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-545149    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-s6bps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-545149             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-7qljd              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-545149 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-545149 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-545149 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-545149 event: Registered Node no-preload-545149 in Controller
	
	
	==> dmesg <==
	[  +0.055340] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040430] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.008090] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.923686] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542431] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.370494] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.062862] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054108] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.164053] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.145553] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.273958] systemd-fstab-generator[706]: Ignoring "noauto" option for root device
	[Aug14 17:38] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.062384] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.832381] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +5.593544] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.358416] kauditd_printk_skb: 85 callbacks suppressed
	[Aug14 17:42] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.534779] systemd-fstab-generator[3083]: Ignoring "noauto" option for root device
	[  +4.634101] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.442861] systemd-fstab-generator[3407]: Ignoring "noauto" option for root device
	[Aug14 17:43] systemd-fstab-generator[3544]: Ignoring "noauto" option for root device
	[  +0.093487] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.574942] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [3a8d7d31b1c602e5cc31a53745b8d294583ecfde3a12ac6d372c54d287bed915] <==
	{"level":"info","ts":"2024-08-14T17:42:50.437685Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-14T17:42:50.437772Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.162:2379"}
	{"level":"info","ts":"2024-08-14T17:42:50.438508Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-14T17:42:50.438900Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da8895e0fc3a6493","local-member-id":"95e2e907d4f1ad16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:50.445924Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:42:50.445979Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-14T17:52:50.488544Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2024-08-14T17:52:50.497911Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":685,"took":"8.966334ms","hash":2757757880,"current-db-size-bytes":2228224,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2228224,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-14T17:52:50.497976Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2757757880,"revision":685,"compact-revision":-1}
	{"level":"warn","ts":"2024-08-14T17:57:34.571115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.899456ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12472315999359212652 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.162\" mod_revision:1150 > success:<request_put:<key:\"/registry/masterleases/192.168.39.162\" value_size:67 lease:3248943962504436842 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.162\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-14T17:57:34.571691Z","caller":"traceutil/trace.go:171","msg":"trace[772564164] linearizableReadLoop","detail":"{readStateIndex:1350; appliedIndex:1349; }","duration":"373.774716ms","start":"2024-08-14T17:57:34.197882Z","end":"2024-08-14T17:57:34.571656Z","steps":["trace[772564164] 'read index received'  (duration: 114.082744ms)","trace[772564164] 'applied index is now lower than readState.Index'  (duration: 259.690469ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-14T17:57:34.571778Z","caller":"traceutil/trace.go:171","msg":"trace[785786095] transaction","detail":"{read_only:false; response_revision:1158; number_of_response:1; }","duration":"384.477019ms","start":"2024-08-14T17:57:34.187265Z","end":"2024-08-14T17:57:34.571742Z","steps":["trace[785786095] 'process raft request'  (duration: 124.7823ms)","trace[785786095] 'compare'  (duration: 257.673986ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-14T17:57:34.571888Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"373.99556ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:57:34.571965Z","caller":"traceutil/trace.go:171","msg":"trace[2045594400] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1158; }","duration":"374.075126ms","start":"2024-08-14T17:57:34.197877Z","end":"2024-08-14T17:57:34.571952Z","steps":["trace[2045594400] 'agreement among raft nodes before linearized reading'  (duration: 373.930492ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:57:34.572023Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T17:57:34.197842Z","time spent":"374.169706ms","remote":"127.0.0.1:58230","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-14T17:57:34.571899Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-14T17:57:34.187249Z","time spent":"384.594263ms","remote":"127.0.0.1:58280","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.39.162\" mod_revision:1150 > success:<request_put:<key:\"/registry/masterleases/192.168.39.162\" value_size:67 lease:3248943962504436842 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.162\" > >"}
	{"level":"warn","ts":"2024-08-14T17:57:34.571980Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.483571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-14T17:57:34.572577Z","caller":"traceutil/trace.go:171","msg":"trace[1910425188] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:1158; }","duration":"206.078012ms","start":"2024-08-14T17:57:34.366487Z","end":"2024-08-14T17:57:34.572565Z","steps":["trace[1910425188] 'agreement among raft nodes before linearized reading'  (duration: 205.460571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:57:34.946190Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.632407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-08-14T17:57:34.946746Z","caller":"traceutil/trace.go:171","msg":"trace[1885233937] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1158; }","duration":"165.192297ms","start":"2024-08-14T17:57:34.781536Z","end":"2024-08-14T17:57:34.946729Z","steps":["trace[1885233937] 'range keys from in-memory index tree'  (duration: 164.561313ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-14T17:57:40.300376Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.826779ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-14T17:57:40.300497Z","caller":"traceutil/trace.go:171","msg":"trace[520913144] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1162; }","duration":"101.97084ms","start":"2024-08-14T17:57:40.198511Z","end":"2024-08-14T17:57:40.300482Z","steps":["trace[520913144] 'range keys from in-memory index tree'  (duration: 101.701541ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-14T17:57:50.496889Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-08-14T17:57:50.501736Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":927,"took":"4.268354ms","hash":4044006680,"current-db-size-bytes":2228224,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-14T17:57:50.501860Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4044006680,"revision":927,"compact-revision":685}
	
	
	==> kernel <==
	 17:57:54 up 20 min,  0 users,  load average: 0.18, 0.14, 0.10
	Linux no-preload-545149 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1c1eb47f90029ae493e6161685327809028a0363e9b595fca997396628067ba9] <==
	W0814 17:42:44.846201       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:44.874223       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:44.984898       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:44.995734       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.003225       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.074308       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.100152       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.104838       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.119529       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.120770       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.129704       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.136209       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.140729       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.159735       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.161152       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.171644       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.200894       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.202293       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.215925       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.290906       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.300692       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.326268       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.340094       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.441140       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0814 17:42:45.518389       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [22898c56f39e5820c769ce0bf4038d54816b8f2cfe0a03e08482fd0311b34c02] <==
	I0814 17:53:53.422041       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:53:53.422091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:55:53.423085       1 handler_proxy.go:99] no RequestInfo found in the context
	W0814 17:55:53.423115       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:55:53.423605       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0814 17:55:53.423626       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0814 17:55:53.425394       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:55:53.425443       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0814 17:57:52.426257       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:52.426472       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 17:57:53.428313       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:53.428459       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0814 17:57:53.428313       1 handler_proxy.go:99] no RequestInfo found in the context
	E0814 17:57:53.428637       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0814 17:57:53.430033       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0814 17:57:53.430113       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a6471a23e249b3de7941e100ad508b6e0d1402f9cd161a4c799c6d899bfff010] <==
	E0814 17:52:29.385043       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:52:29.941947       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:52:59.391962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:52:59.951847       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:53:16.540921       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-545149"
	E0814 17:53:29.398370       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:53:29.959760       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:53:57.904217       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="109.834µs"
	E0814 17:53:59.405020       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:53:59.969010       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0814 17:54:11.905120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="176.577µs"
	E0814 17:54:29.412155       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:54:29.976973       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:54:59.419536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:54:59.987857       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:55:29.425097       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:55:29.996706       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:55:59.431202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:56:00.006557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:56:29.437937       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:56:30.016033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:56:59.444787       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:57:00.023552       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0814 17:57:29.451035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0814 17:57:30.035619       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6f86f7bd2800b70cb2d03417070b0d258c70f0a74abcf0ce14d441051eea33d8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0814 17:43:01.022071       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0814 17:43:01.040199       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.162"]
	E0814 17:43:01.040294       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0814 17:43:01.200564       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0814 17:43:01.200615       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0814 17:43:01.200647       1 server_linux.go:169] "Using iptables Proxier"
	I0814 17:43:01.203735       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0814 17:43:01.204021       1 server.go:483] "Version info" version="v1.31.0"
	I0814 17:43:01.204055       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0814 17:43:01.208212       1 config.go:197] "Starting service config controller"
	I0814 17:43:01.208294       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0814 17:43:01.208330       1 config.go:104] "Starting endpoint slice config controller"
	I0814 17:43:01.208353       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0814 17:43:01.216185       1 config.go:326] "Starting node config controller"
	I0814 17:43:01.216221       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0814 17:43:01.308583       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0814 17:43:01.308657       1 shared_informer.go:320] Caches are synced for service config
	I0814 17:43:01.339444       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [ad2db9a00effebd7f31ab18c8af6f07fbc41cdcc1ae3a4129284fb150cb914b5] <==
	W0814 17:42:52.423914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 17:42:52.423937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:52.424259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 17:42:52.424369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.265591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0814 17:42:53.265667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.279471       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0814 17:42:53.279544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.375651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0814 17:42:53.375734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.451238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0814 17:42:53.451383       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.599186       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0814 17:42:53.599304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.617352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0814 17:42:53.617462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.631807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0814 17:42:53.632375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.657940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0814 17:42:53.657989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.658685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0814 17:42:53.658724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0814 17:42:53.856662       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0814 17:42:53.856708       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0814 17:42:55.515479       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 14 17:56:54 no-preload-545149 kubelet[3414]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:56:55 no-preload-545149 kubelet[3414]: E0814 17:56:55.147538    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658215147178670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:56:55 no-preload-545149 kubelet[3414]: E0814 17:56:55.147611    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658215147178670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:56:56 no-preload-545149 kubelet[3414]: E0814 17:56:56.889158    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:57:05 no-preload-545149 kubelet[3414]: E0814 17:57:05.151718    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658225151473157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:05 no-preload-545149 kubelet[3414]: E0814 17:57:05.151773    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658225151473157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:10 no-preload-545149 kubelet[3414]: E0814 17:57:10.888385    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:57:15 no-preload-545149 kubelet[3414]: E0814 17:57:15.153077    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658235152545586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:15 no-preload-545149 kubelet[3414]: E0814 17:57:15.153115    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658235152545586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:25 no-preload-545149 kubelet[3414]: E0814 17:57:25.155808    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658245155160446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:25 no-preload-545149 kubelet[3414]: E0814 17:57:25.156767    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658245155160446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:25 no-preload-545149 kubelet[3414]: E0814 17:57:25.888379    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:57:35 no-preload-545149 kubelet[3414]: E0814 17:57:35.157882    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658255157631570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:35 no-preload-545149 kubelet[3414]: E0814 17:57:35.157924    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658255157631570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:37 no-preload-545149 kubelet[3414]: E0814 17:57:37.888885    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:57:45 no-preload-545149 kubelet[3414]: E0814 17:57:45.159816    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658265159377112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:45 no-preload-545149 kubelet[3414]: E0814 17:57:45.160068    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658265159377112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:52 no-preload-545149 kubelet[3414]: E0814 17:57:52.889003    3414 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7qljd" podUID="0f0e5d07-eb28-46b3-9270-554006151eda"
	Aug 14 17:57:54 no-preload-545149 kubelet[3414]: E0814 17:57:54.915650    3414 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 14 17:57:54 no-preload-545149 kubelet[3414]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 14 17:57:54 no-preload-545149 kubelet[3414]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 14 17:57:54 no-preload-545149 kubelet[3414]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 14 17:57:54 no-preload-545149 kubelet[3414]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 14 17:57:55 no-preload-545149 kubelet[3414]: E0814 17:57:55.161242    3414 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658275160930867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 14 17:57:55 no-preload-545149 kubelet[3414]: E0814 17:57:55.161280    3414 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658275160930867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6411832275e2f94ebdb33c9b604c0362791bd2b6a2f6605f150a45653e325d4c] <==
	I0814 17:43:02.376085       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0814 17:43:02.394234       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0814 17:43:02.394308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0814 17:43:02.410086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0814 17:43:02.411304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7670f961-0e1b-47fe-a4ba-c3344e080f56", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-545149_e685c9c5-9ca9-498b-ba4e-231abf101220 became leader
	I0814 17:43:02.411728       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-545149_e685c9c5-9ca9-498b-ba4e-231abf101220!
	I0814 17:43:02.512836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-545149_e685c9c5-9ca9-498b-ba4e-231abf101220!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-545149 -n no-preload-545149
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-545149 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7qljd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-545149 describe pod metrics-server-6867b74b74-7qljd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-545149 describe pod metrics-server-6867b74b74-7qljd: exit status 1 (65.201169ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7qljd" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-545149 describe pod metrics-server-6867b74b74-7qljd: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (340.60s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (137.05s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
E0814 17:54:58.429083   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[the previous warning repeated 24 more times]
E0814 17:55:55.282853   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[the previous warning repeated 29 more times]
E0814 17:56:25.080743   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.49:8443: connect: connection refused
[the previous warning repeated 35 more times]
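For context, every one of the warnings above is the test helper re-polling the dashboard pods through the apiserver at 192.168.72.49:8443, which kept refusing connections. A rough manual equivalent of that poll (a sketch only; it assumes the profile/context name old-k8s-version-505584 from this run) would be:

    # list the dashboard pods the helper is waiting for
    kubectl --context old-k8s-version-505584 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard

    # or probe the exact endpoint from the warning; this only checks reachability,
    # a reachable apiserver would still reject the unauthenticated request
    curl -k "https://192.168.72.49:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"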
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 2 (220.658001ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-505584" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-505584 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-505584 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.208µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-505584 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
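The image assertion could not be evaluated because the kubectl describe above hit the context deadline. When the apiserver is reachable, a sketch of an equivalent manual check (profile, namespace, and image names are taken from this run; the jsonpath query itself is illustrative, not the test's own code) would be:

    # print the images used by the dashboard-metrics-scraper deployment
    kubectl --context old-k8s-version-505584 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

    # the test expects this to contain registry.k8s.io/echoserver:1.4, the image the
    # dashboard addon was enabled with via --images=MetricsScraper=registry.k8s.io/echoserver:1.4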
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 2 (217.718391ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-505584 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-505584 logs -n 25: (1.529484865s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-984053 sudo cat                              | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo                                  | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo find                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-984053 sudo crio                             | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-984053                                       | calico-984053                | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-005029 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:28 UTC |
	|         | disable-driver-mounts-005029                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:28 UTC | 14 Aug 24 17:30 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-545149             | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-309673            | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC | 14 Aug 24 17:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-885666  | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC | 14 Aug 24 17:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:30 UTC |                     |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-545149                  | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-505584        | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-545149                                   | no-preload-545149            | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC | 14 Aug 24 17:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-309673                 | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-309673                                  | embed-certs-309673           | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-885666       | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-885666 | jenkins | v1.33.1 | 14 Aug 24 17:32 UTC | 14 Aug 24 17:42 UTC |
	|         | default-k8s-diff-port-885666                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-505584             | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC | 14 Aug 24 17:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-505584                              | old-k8s-version-505584       | jenkins | v1.33.1 | 14 Aug 24 17:33 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 17:33:46
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 17:33:46.321266   80228 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:33:46.321519   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321529   80228 out.go:304] Setting ErrFile to fd 2...
	I0814 17:33:46.321533   80228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:33:46.321691   80228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:33:46.322185   80228 out.go:298] Setting JSON to false
	I0814 17:33:46.323102   80228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8170,"bootTime":1723648656,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:33:46.323161   80228 start.go:139] virtualization: kvm guest
	I0814 17:33:46.325361   80228 out.go:177] * [old-k8s-version-505584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:33:46.326668   80228 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:33:46.326679   80228 notify.go:220] Checking for updates...
	I0814 17:33:46.329217   80228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:33:46.330813   80228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:33:46.332019   80228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:33:46.333264   80228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:33:46.334480   80228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:33:46.336108   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:33:46.336521   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.336564   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.351154   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0814 17:33:46.351563   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.352042   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.352061   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.352395   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.352567   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.354248   80228 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0814 17:33:46.355547   80228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:33:46.355834   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:33:46.355865   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:33:46.370976   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41681
	I0814 17:33:46.371452   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:33:46.371977   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:33:46.372008   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:33:46.372376   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:33:46.372624   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:33:46.407797   80228 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 17:33:46.408905   80228 start.go:297] selected driver: kvm2
	I0814 17:33:46.408918   80228 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.409022   80228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:33:46.409677   80228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.409753   80228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 17:33:46.424801   80228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 17:33:46.425288   80228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:33:46.425338   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:33:46.425349   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:33:46.425396   80228 start.go:340] cluster config:
	{Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:33:46.425518   80228 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 17:33:46.427224   80228 out.go:177] * Starting "old-k8s-version-505584" primary control-plane node in "old-k8s-version-505584" cluster
	I0814 17:33:46.428485   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:33:46.428516   80228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 17:33:46.428523   80228 cache.go:56] Caching tarball of preloaded images
	I0814 17:33:46.428589   80228 preload.go:172] Found /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0814 17:33:46.428600   80228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0814 17:33:46.428727   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:33:46.428899   80228 start.go:360] acquireMachinesLock for old-k8s-version-505584: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:33:47.579625   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:50.651557   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:56.731587   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:33:59.803787   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:05.883582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:08.959564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:15.035593   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:18.107634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:24.187624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:27.259634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:33.339631   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:36.411675   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:42.491633   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:45.563609   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:51.643582   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:34:54.715620   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:00.795564   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:03.867637   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:09.947634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:13.019646   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:19.099578   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:22.171640   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:28.251634   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:31.323645   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:37.403627   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:40.475635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:46.555591   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:49.627635   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:55.707632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:35:58.779532   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:04.859619   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:07.931632   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:14.011612   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:17.083624   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:23.163638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:26.235638   79367 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I0814 17:36:29.240279   79521 start.go:364] duration metric: took 4m23.88398072s to acquireMachinesLock for "embed-certs-309673"
	I0814 17:36:29.240341   79521 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:29.240351   79521 fix.go:54] fixHost starting: 
	I0814 17:36:29.240703   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:29.240730   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:29.255901   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46483
	I0814 17:36:29.256372   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:29.256816   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:36:29.256839   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:29.257153   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:29.257337   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:29.257518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:36:29.259382   79521 fix.go:112] recreateIfNeeded on embed-certs-309673: state=Stopped err=<nil>
	I0814 17:36:29.259419   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	W0814 17:36:29.259583   79521 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:29.261931   79521 out.go:177] * Restarting existing kvm2 VM for "embed-certs-309673" ...
	I0814 17:36:29.263301   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Start
	I0814 17:36:29.263487   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring networks are active...
	I0814 17:36:29.264251   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network default is active
	I0814 17:36:29.264797   79521 main.go:141] libmachine: (embed-certs-309673) Ensuring network mk-embed-certs-309673 is active
	I0814 17:36:29.265331   79521 main.go:141] libmachine: (embed-certs-309673) Getting domain xml...
	I0814 17:36:29.266055   79521 main.go:141] libmachine: (embed-certs-309673) Creating domain...
	I0814 17:36:29.237663   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:29.237704   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238088   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:36:29.238131   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:36:29.238337   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:36:29.240159   79367 machine.go:97] duration metric: took 4m37.421920583s to provisionDockerMachine
	I0814 17:36:29.240195   79367 fix.go:56] duration metric: took 4m37.443181113s for fixHost
	I0814 17:36:29.240202   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 4m37.443414836s
	W0814 17:36:29.240223   79367 start.go:714] error starting host: provision: host is not running
	W0814 17:36:29.240348   79367 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0814 17:36:29.240358   79367 start.go:729] Will try again in 5 seconds ...
	I0814 17:36:30.482377   79521 main.go:141] libmachine: (embed-certs-309673) Waiting to get IP...
	I0814 17:36:30.483405   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.483750   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.483837   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.483729   80776 retry.go:31] will retry after 224.900105ms: waiting for machine to come up
	I0814 17:36:30.710259   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:30.710718   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:30.710748   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:30.710679   80776 retry.go:31] will retry after 322.892012ms: waiting for machine to come up
	I0814 17:36:31.035358   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.035807   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.035835   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.035757   80776 retry.go:31] will retry after 374.226901ms: waiting for machine to come up
	I0814 17:36:31.411228   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.411783   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.411813   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.411717   80776 retry.go:31] will retry after 472.149905ms: waiting for machine to come up
	I0814 17:36:31.885265   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:31.885787   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:31.885810   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:31.885757   80776 retry.go:31] will retry after 676.063343ms: waiting for machine to come up
	I0814 17:36:32.563206   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:32.563711   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:32.563745   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:32.563658   80776 retry.go:31] will retry after 904.634039ms: waiting for machine to come up
	I0814 17:36:33.469832   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:33.470255   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:33.470278   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:33.470206   80776 retry.go:31] will retry after 1.132974911s: waiting for machine to come up
	I0814 17:36:34.605040   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:34.605542   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:34.605576   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:34.605498   80776 retry.go:31] will retry after 1.210457498s: waiting for machine to come up
	I0814 17:36:34.242590   79367 start.go:360] acquireMachinesLock for no-preload-545149: {Name:mk61618450f33ce76e4843d7a1f08ede28bf5692 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0814 17:36:35.817809   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:35.818152   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:35.818177   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:35.818111   80776 retry.go:31] will retry after 1.275236618s: waiting for machine to come up
	I0814 17:36:37.095551   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:37.095975   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:37.096001   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:37.095937   80776 retry.go:31] will retry after 1.716925001s: waiting for machine to come up
	I0814 17:36:38.814927   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:38.815916   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:38.815943   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:38.815864   80776 retry.go:31] will retry after 2.040428036s: waiting for machine to come up
	I0814 17:36:40.858640   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:40.859157   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:40.859188   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:40.859108   80776 retry.go:31] will retry after 2.259949864s: waiting for machine to come up
	I0814 17:36:43.120436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:43.120913   79521 main.go:141] libmachine: (embed-certs-309673) DBG | unable to find current IP address of domain embed-certs-309673 in network mk-embed-certs-309673
	I0814 17:36:43.120939   79521 main.go:141] libmachine: (embed-certs-309673) DBG | I0814 17:36:43.120879   80776 retry.go:31] will retry after 3.64334808s: waiting for machine to come up
	I0814 17:36:47.975977   79871 start.go:364] duration metric: took 3m52.18367446s to acquireMachinesLock for "default-k8s-diff-port-885666"
	I0814 17:36:47.976049   79871 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:36:47.976064   79871 fix.go:54] fixHost starting: 
	I0814 17:36:47.976457   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:36:47.976492   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:36:47.993513   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0814 17:36:47.993940   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:36:47.994480   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:36:47.994504   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:36:47.994815   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:36:47.995005   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:36:47.995181   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:36:47.996716   79871 fix.go:112] recreateIfNeeded on default-k8s-diff-port-885666: state=Stopped err=<nil>
	I0814 17:36:47.996755   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	W0814 17:36:47.996923   79871 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:36:47.998967   79871 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-885666" ...
	I0814 17:36:46.766908   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767458   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has current primary IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.767500   79521 main.go:141] libmachine: (embed-certs-309673) Found IP for machine: 192.168.61.2
	I0814 17:36:46.767516   79521 main.go:141] libmachine: (embed-certs-309673) Reserving static IP address...
	I0814 17:36:46.767974   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.767993   79521 main.go:141] libmachine: (embed-certs-309673) Reserved static IP address: 192.168.61.2
	I0814 17:36:46.768006   79521 main.go:141] libmachine: (embed-certs-309673) DBG | skip adding static IP to network mk-embed-certs-309673 - found existing host DHCP lease matching {name: "embed-certs-309673", mac: "52:54:00:ed:61:4e", ip: "192.168.61.2"}
	I0814 17:36:46.768017   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Getting to WaitForSSH function...
	I0814 17:36:46.768023   79521 main.go:141] libmachine: (embed-certs-309673) Waiting for SSH to be available...
	I0814 17:36:46.770187   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770517   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.770548   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.770612   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH client type: external
	I0814 17:36:46.770643   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa (-rw-------)
	I0814 17:36:46.770672   79521 main.go:141] libmachine: (embed-certs-309673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:36:46.770697   79521 main.go:141] libmachine: (embed-certs-309673) DBG | About to run SSH command:
	I0814 17:36:46.770703   79521 main.go:141] libmachine: (embed-certs-309673) DBG | exit 0
	I0814 17:36:46.895078   79521 main.go:141] libmachine: (embed-certs-309673) DBG | SSH cmd err, output: <nil>: 
	I0814 17:36:46.895444   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetConfigRaw
	I0814 17:36:46.896033   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:46.898715   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899085   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.899117   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.899434   79521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/config.json ...
	I0814 17:36:46.899701   79521 machine.go:94] provisionDockerMachine start ...
	I0814 17:36:46.899723   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:46.899906   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:46.901985   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902244   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:46.902268   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:46.902398   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:46.902564   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902707   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:46.902829   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:46.902966   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:46.903201   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:46.903213   79521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:36:47.007289   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:36:47.007313   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007589   79521 buildroot.go:166] provisioning hostname "embed-certs-309673"
	I0814 17:36:47.007608   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.007802   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.010311   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010631   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.010670   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.010805   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.010956   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011067   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.011160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.011269   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.011455   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.011467   79521 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-309673 && echo "embed-certs-309673" | sudo tee /etc/hostname
	I0814 17:36:47.128575   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-309673
	
	I0814 17:36:47.128601   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.131125   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131464   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.131493   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.131655   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.131970   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132146   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.132286   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.132457   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.132614   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.132630   79521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-309673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-309673/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-309673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:36:47.247426   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:36:47.247469   79521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:36:47.247486   79521 buildroot.go:174] setting up certificates
	I0814 17:36:47.247496   79521 provision.go:84] configureAuth start
	I0814 17:36:47.247506   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetMachineName
	I0814 17:36:47.247768   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.250616   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.250993   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.251018   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.251148   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.253149   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253436   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.253465   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.253551   79521 provision.go:143] copyHostCerts
	I0814 17:36:47.253616   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:36:47.253628   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:36:47.253703   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:36:47.253817   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:36:47.253835   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:36:47.253875   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:36:47.253952   79521 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:36:47.253962   79521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:36:47.253994   79521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:36:47.254060   79521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.embed-certs-309673 san=[127.0.0.1 192.168.61.2 embed-certs-309673 localhost minikube]
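	(For context on the provision step above: minikube generates this server certificate in Go and signs it with the cluster CA named in the log line. A roughly equivalent hand-run sketch with openssl, using only the org and SAN values shown above and otherwise assumed file names and flags, would look like:
	    # illustrative only: minikube performs this step in Go, not via openssl
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -subj "/O=jenkins.embed-certs-309673" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.2,DNS:embed-certs-309673,DNS:localhost,DNS:minikube")
	The resulting server.pem/server-key.pem pair is what the copyRemoteCerts step below pushes to /etc/docker on the VM.)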
	I0814 17:36:47.338831   79521 provision.go:177] copyRemoteCerts
	I0814 17:36:47.338892   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:36:47.338921   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.341582   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.341897   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.341915   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.342053   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.342237   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.342374   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.342497   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.424777   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:36:47.446682   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:36:47.467672   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:36:47.488423   79521 provision.go:87] duration metric: took 240.914172ms to configureAuth
	I0814 17:36:47.488453   79521 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:36:47.488645   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:36:47.488733   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.491453   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.491793   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.491816   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.492028   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.492216   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492351   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.492479   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.492716   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.492909   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.492931   79521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:36:47.746210   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:36:47.746248   79521 machine.go:97] duration metric: took 846.530779ms to provisionDockerMachine
	I0814 17:36:47.746260   79521 start.go:293] postStartSetup for "embed-certs-309673" (driver="kvm2")
	I0814 17:36:47.746274   79521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:36:47.746297   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.746659   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:36:47.746694   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.749342   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749674   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.749702   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.749831   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.750004   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.750126   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.750272   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.833279   79521 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:36:47.837076   79521 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:36:47.837099   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:36:47.837183   79521 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:36:47.837269   79521 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:36:47.837387   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:36:47.845640   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:47.866978   79521 start.go:296] duration metric: took 120.70557ms for postStartSetup
	I0814 17:36:47.867012   79521 fix.go:56] duration metric: took 18.626661733s for fixHost
	I0814 17:36:47.867030   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.869687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870016   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.870046   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.870220   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.870399   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870660   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.870827   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.870999   79521 main.go:141] libmachine: Using SSH client type: native
	I0814 17:36:47.871209   79521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0814 17:36:47.871221   79521 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:36:47.975817   79521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657007.950271601
	
	I0814 17:36:47.975848   79521 fix.go:216] guest clock: 1723657007.950271601
	I0814 17:36:47.975860   79521 fix.go:229] Guest: 2024-08-14 17:36:47.950271601 +0000 UTC Remote: 2024-08-14 17:36:47.867016056 +0000 UTC m=+282.648397849 (delta=83.255545ms)
	I0814 17:36:47.975889   79521 fix.go:200] guest clock delta is within tolerance: 83.255545ms
	I0814 17:36:47.975896   79521 start.go:83] releasing machines lock for "embed-certs-309673", held for 18.735575335s
	I0814 17:36:47.975931   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.976213   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:47.978934   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979457   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.979483   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.979625   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980134   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980303   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:36:47.980382   79521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:36:47.980428   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.980574   79521 ssh_runner.go:195] Run: cat /version.json
	I0814 17:36:47.980603   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:36:47.983247   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983557   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983649   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.983687   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.983828   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984032   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984042   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:47.984063   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:47.984183   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:36:47.984232   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984320   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:36:47.984412   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:47.984467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:36:47.984608   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:36:48.064891   79521 ssh_runner.go:195] Run: systemctl --version
	I0814 17:36:48.101403   79521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:36:48.239841   79521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:36:48.245634   79521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:36:48.245718   79521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:36:48.260517   79521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:36:48.260543   79521 start.go:495] detecting cgroup driver to use...
	I0814 17:36:48.260597   79521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:36:48.275003   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:36:48.290316   79521 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:36:48.290376   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:36:48.304351   79521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:36:48.320954   79521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:36:48.434176   79521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:36:48.582137   79521 docker.go:233] disabling docker service ...
	I0814 17:36:48.582217   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:36:48.595784   79521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:36:48.608379   79521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:36:48.735500   79521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:36:48.876194   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:36:48.891826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:36:48.910820   79521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:36:48.910887   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.921125   79521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:36:48.921198   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.931615   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.942779   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.953124   79521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:36:48.963454   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.974457   79521 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:36:48.991583   79521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
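	(Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following content. This is a sketch assembled from the values visible in the log; the [crio.image]/[crio.runtime] section headers are assumed from CRI-O's stock configuration and are not shown in the log itself:
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	The daemon-reload and crio restart a few lines below pick these settings up.)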
	I0814 17:36:49.006059   79521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:36:49.015586   79521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:36:49.015649   79521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:36:49.028742   79521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:36:49.038126   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:49.155387   79521 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:36:49.318598   79521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:36:49.318679   79521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:36:49.323575   79521 start.go:563] Will wait 60s for crictl version
	I0814 17:36:49.323636   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:36:49.327233   79521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:36:49.369724   79521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:36:49.369814   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.399516   79521 ssh_runner.go:195] Run: crio --version
	I0814 17:36:49.431594   79521 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:36:49.432940   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetIP
	I0814 17:36:49.435776   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436168   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:36:49.436199   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:36:49.436447   79521 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0814 17:36:49.440606   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:36:49.453159   79521 kubeadm.go:883] updating cluster {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:36:49.453272   79521 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:36:49.453311   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:49.486635   79521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:36:49.486708   79521 ssh_runner.go:195] Run: which lz4
	I0814 17:36:49.490626   79521 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:36:49.494822   79521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:36:49.494852   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:36:48.000271   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Start
	I0814 17:36:48.000453   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring networks are active...
	I0814 17:36:48.001246   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network default is active
	I0814 17:36:48.001621   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Ensuring network mk-default-k8s-diff-port-885666 is active
	I0814 17:36:48.002158   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Getting domain xml...
	I0814 17:36:48.002982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Creating domain...
	I0814 17:36:49.272729   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting to get IP...
	I0814 17:36:49.273726   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274182   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.274273   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.274157   80921 retry.go:31] will retry after 208.258845ms: waiting for machine to come up
	I0814 17:36:49.483781   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.484278   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.484211   80921 retry.go:31] will retry after 318.193974ms: waiting for machine to come up
	I0814 17:36:49.803815   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804311   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:49.804339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:49.804277   80921 retry.go:31] will retry after 426.023242ms: waiting for machine to come up
	I0814 17:36:50.232060   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232610   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.232646   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.232519   80921 retry.go:31] will retry after 534.392065ms: waiting for machine to come up
	I0814 17:36:50.745416   79521 crio.go:462] duration metric: took 1.254815826s to copy over tarball
	I0814 17:36:50.745515   79521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:36:52.865848   79521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.120299454s)
	I0814 17:36:52.865879   79521 crio.go:469] duration metric: took 2.120437156s to extract the tarball
	I0814 17:36:52.865887   79521 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:36:52.901808   79521 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:36:52.946366   79521 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:36:52.946386   79521 cache_images.go:84] Images are preloaded, skipping loading
	I0814 17:36:52.946394   79521 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.31.0 crio true true} ...
	I0814 17:36:52.946492   79521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-309673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:36:52.946556   79521 ssh_runner.go:195] Run: crio config
	I0814 17:36:52.992520   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:36:52.992541   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:36:52.992553   79521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:36:52.992577   79521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-309673 NodeName:embed-certs-309673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:36:52.992740   79521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-309673"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:36:52.992811   79521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:36:53.002460   79521 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:36:53.002539   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:36:53.011167   79521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0814 17:36:53.026436   79521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:36:53.041728   79521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0814 17:36:53.059102   79521 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0814 17:36:53.062728   79521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
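[Editor's note] The two Run lines above first grep /etc/hosts for an existing control-plane.minikube.internal entry and, when it is missing, rewrite the file with the mapping appended. A minimal Go sketch of the same idempotent update (a hypothetical helper for illustration only; minikube performs this via the shell one-liner over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so that exactly one line maps hostname to ip,
// mirroring the logged shell one-liner: drop any stale entry, append the
// current mapping, then write the file back.
func upsertHost(hostsPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Skip lines that already end with "<tab><hostname>".
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.61.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
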
	I0814 17:36:53.073803   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:36:53.200870   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:36:53.217448   79521 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673 for IP: 192.168.61.2
	I0814 17:36:53.217472   79521 certs.go:194] generating shared ca certs ...
	I0814 17:36:53.217495   79521 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:36:53.217694   79521 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:36:53.217755   79521 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:36:53.217766   79521 certs.go:256] generating profile certs ...
	I0814 17:36:53.217876   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/client.key
	I0814 17:36:53.217961   79521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key.83510bb8
	I0814 17:36:53.218034   79521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key
	I0814 17:36:53.218202   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:36:53.218248   79521 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:36:53.218272   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:36:53.218309   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:36:53.218343   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:36:53.218380   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:36:53.218447   79521 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:36:53.219187   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:36:53.273437   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:36:53.307566   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:36:53.330107   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:36:53.360324   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0814 17:36:53.386974   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0814 17:36:53.409537   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:36:53.433873   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/embed-certs-309673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:36:53.456408   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:36:53.478233   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:36:53.500264   79521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:36:53.522440   79521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:36:53.538977   79521 ssh_runner.go:195] Run: openssl version
	I0814 17:36:53.544866   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:36:53.555085   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559340   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.559399   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:36:53.565106   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:36:53.575561   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:36:53.585605   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589838   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.589911   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:36:53.595165   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:36:53.604934   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:36:53.615153   79521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619362   79521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.619435   79521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:36:53.624949   79521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:36:53.635459   79521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:36:53.639814   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:36:53.645419   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:36:53.651013   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:36:53.657004   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:36:53.662540   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:36:53.668187   79521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
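[Editor's note] Each of the openssl x509 ... -checkend 86400 runs above asks whether a certificate remains valid for at least the next 24 hours. A hedged Go equivalent using crypto/x509 (a sketch of the check, not the code behind these log lines):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d, roughly what `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
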
	I0814 17:36:53.673762   79521 kubeadm.go:392] StartCluster: {Name:embed-certs-309673 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-309673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:36:53.673867   79521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:36:53.673930   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.709404   79521 cri.go:89] found id: ""
	I0814 17:36:53.709490   79521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:36:53.719041   79521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:36:53.719068   79521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:36:53.719123   79521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:36:53.728077   79521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:36:53.729030   79521 kubeconfig.go:125] found "embed-certs-309673" server: "https://192.168.61.2:8443"
	I0814 17:36:53.730943   79521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:36:53.739841   79521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0814 17:36:53.739872   79521 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:36:53.739886   79521 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:36:53.739947   79521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:36:53.777400   79521 cri.go:89] found id: ""
	I0814 17:36:53.777476   79521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:36:53.792838   79521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:36:53.802189   79521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:36:53.802223   79521 kubeadm.go:157] found existing configuration files:
	
	I0814 17:36:53.802278   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:36:53.813778   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:36:53.813854   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:36:53.825962   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:36:53.834929   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:36:53.834987   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:36:53.846315   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.855138   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:36:53.855206   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:36:53.864109   79521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:36:53.872613   79521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:36:53.872672   79521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:36:53.881307   79521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
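[Editor's note] The kubeadm.yaml handling above follows a render-to-.new, diff-against-the-live-file, then copy-into-place pattern. A rough Go sketch of that promote-if-rendered idea (illustrative only; paths and behavior are simplified relative to what the log shows, and the real steps run over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// promoteIfChanged writes the rendered config to path+".new" and only replaces
// the live file when the contents differ.
func promoteIfChanged(path string, rendered []byte) (bool, error) {
	newPath := path + ".new"
	if err := os.WriteFile(newPath, rendered, 0644); err != nil {
		return false, err
	}
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // nothing to do, config unchanged
	}
	return true, os.Rename(newPath, path)
}

func main() {
	changed, err := promoteIfChanged("/var/tmp/minikube/kubeadm.yaml", []byte("kind: ClusterConfiguration\n"))
	fmt.Println("replaced:", changed, "err:", err)
}
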
	I0814 17:36:53.890148   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.002103   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.664940   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.868608   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:54.932317   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:36:55.006430   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:36:55.006523   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:50.768099   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768599   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:50.768629   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:50.768554   80921 retry.go:31] will retry after 487.741283ms: waiting for machine to come up
	I0814 17:36:51.258499   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:51.259047   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:51.258975   80921 retry.go:31] will retry after 831.435484ms: waiting for machine to come up
	I0814 17:36:52.091900   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092297   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:52.092351   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:52.092249   80921 retry.go:31] will retry after 1.067858402s: waiting for machine to come up
	I0814 17:36:53.161928   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162393   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:53.162449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:53.162366   80921 retry.go:31] will retry after 1.33971606s: waiting for machine to come up
	I0814 17:36:54.503810   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504184   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:54.504214   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:54.504121   80921 retry.go:31] will retry after 1.4882184s: waiting for machine to come up
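[Editor's note] The libmachine lines above poll for the new domain's IP address, with retry.go waiting a progressively longer, slightly jittered delay between attempts. A minimal sketch of that wait-with-backoff pattern (assumed shape, not the actual retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries check with a growing, jittered delay until it succeeds or
// maxAttempts is exhausted.
func waitFor(check func() error, maxAttempts int) error {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := check(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay += delay / 2 // grow the base delay ~1.5x per attempt
	}
	return errors.New("condition not met before attempts ran out")
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("machine has no IP yet")
		}
		return nil
	}, 10)
	fmt.Println("done:", err)
}
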
	I0814 17:36:55.506634   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.007367   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:56.507265   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.007343   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:36:57.026436   79521 api_server.go:72] duration metric: took 2.020005984s to wait for apiserver process to appear ...
	I0814 17:36:57.026471   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:36:57.026496   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:36:55.994824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995255   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:55.995283   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:55.995206   80921 retry.go:31] will retry after 1.65461779s: waiting for machine to come up
	I0814 17:36:57.651449   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651837   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:36:57.651867   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:36:57.651794   80921 retry.go:31] will retry after 2.38071296s: waiting for machine to come up
	I0814 17:37:00.033719   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034261   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:00.034290   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:00.034204   80921 retry.go:31] will retry after 3.476533232s: waiting for machine to come up
	I0814 17:37:00.329636   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.329674   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.329689   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.357287   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:00.357334   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:00.527150   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:00.536020   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:00.536058   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.026558   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.034241   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.034271   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:01.526814   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:01.536226   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:01.536267   79521 api_server.go:103] status: https://192.168.61.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:02.026791   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:37:02.031068   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:37:02.037240   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:02.037266   79521 api_server.go:131] duration metric: took 5.010786446s to wait for apiserver health ...
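[Editor's note] The healthz probes above move from 403 (the anonymous probe is rejected before RBAC bootstrap completes) through 500 (the rbac/bootstrap-roles and scheduling post-start hooks are still failing) to 200. A bare-bones polling loop in Go, skipping TLS verification purely to keep the sketch short (minikube's real client authenticates properly):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it answers 200 or the
// deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
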
	I0814 17:37:02.037278   79521 cni.go:84] Creating CNI manager for ""
	I0814 17:37:02.037286   79521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:02.039248   79521 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:37:02.040543   79521 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:02.050754   79521 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:02.067333   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:02.076082   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:02.076115   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:02.076130   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:02.076139   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:02.076178   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:02.076198   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:02.076218   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:02.076233   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:02.076242   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:02.076253   79521 system_pods.go:74] duration metric: took 8.901356ms to wait for pod list to return data ...
	I0814 17:37:02.076266   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:02.080064   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:02.080087   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:02.080101   79521 node_conditions.go:105] duration metric: took 3.829329ms to run NodePressure ...
	I0814 17:37:02.080121   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:02.359163   79521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368689   79521 kubeadm.go:739] kubelet initialised
	I0814 17:37:02.368717   79521 kubeadm.go:740] duration metric: took 9.524301ms waiting for restarted kubelet to initialise ...
	I0814 17:37:02.368728   79521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:02.376056   79521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.381317   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381347   79521 pod_ready.go:81] duration metric: took 5.262062ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.381359   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.381370   79521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.386799   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386822   79521 pod_ready.go:81] duration metric: took 5.440585ms for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.386832   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "etcd-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.386838   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.392829   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392853   79521 pod_ready.go:81] duration metric: took 6.003762ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.392864   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.392874   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.470943   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470975   79521 pod_ready.go:81] duration metric: took 78.089715ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.470984   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.470996   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:02.870134   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870163   79521 pod_ready.go:81] duration metric: took 399.157385ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:02.870175   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-proxy-z8x9t" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:02.870183   79521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.270805   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270837   79521 pod_ready.go:81] duration metric: took 400.647029ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.270848   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.270856   79521 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:03.671023   79521 pod_ready.go:97] node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671058   79521 pod_ready.go:81] duration metric: took 400.191147ms for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:03.671070   79521 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-309673" hosting pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:03.671079   79521 pod_ready.go:38] duration metric: took 1.302340033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
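[Editor's note] pod_ready.go above skips each per-pod wait because the node itself is not yet Ready. A compact client-go sketch of the underlying pod Ready-condition test (an illustration, not minikube's helper; the kubeconfig path is the one referenced a few lines below in this log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady fetches a pod and reports whether its Ready condition is True.
func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19446-13977/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-system", "etcd-embed-certs-309673")
	fmt.Println("ready:", ready, "err:", err)
}
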
	I0814 17:37:03.671098   79521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:37:03.683676   79521 ops.go:34] apiserver oom_adj: -16
	I0814 17:37:03.683701   79521 kubeadm.go:597] duration metric: took 9.964625256s to restartPrimaryControlPlane
	I0814 17:37:03.683712   79521 kubeadm.go:394] duration metric: took 10.009956133s to StartCluster
	I0814 17:37:03.683729   79521 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.683809   79521 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:03.685474   79521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:03.685708   79521 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:37:03.685766   79521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:37:03.685850   79521 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-309673"
	I0814 17:37:03.685862   79521 addons.go:69] Setting default-storageclass=true in profile "embed-certs-309673"
	I0814 17:37:03.685900   79521 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-309673"
	I0814 17:37:03.685907   79521 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-309673"
	W0814 17:37:03.685911   79521 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:37:03.685933   79521 config.go:182] Loaded profile config "embed-certs-309673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:03.685933   79521 addons.go:69] Setting metrics-server=true in profile "embed-certs-309673"
	I0814 17:37:03.685988   79521 addons.go:234] Setting addon metrics-server=true in "embed-certs-309673"
	W0814 17:37:03.686006   79521 addons.go:243] addon metrics-server should already be in state true
	I0814 17:37:03.685945   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686076   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.686284   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686362   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686391   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686422   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.686482   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.686538   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.687598   79521 out.go:177] * Verifying Kubernetes components...
	I0814 17:37:03.688995   79521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:03.701610   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32985
	I0814 17:37:03.702174   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.702789   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.702817   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.703223   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.703682   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.704077   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0814 17:37:03.704508   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.704864   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34649
	I0814 17:37:03.705141   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705154   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.705224   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.705473   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.705656   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.705670   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.706806   79521 addons.go:234] Setting addon default-storageclass=true in "embed-certs-309673"
	W0814 17:37:03.706824   79521 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:37:03.706851   79521 host.go:66] Checking if "embed-certs-309673" exists ...
	I0814 17:37:03.707093   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707112   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.707420   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.707536   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.707584   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.708017   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.708079   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.722383   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41485
	I0814 17:37:03.722779   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.723288   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.723307   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.728799   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38405
	I0814 17:37:03.728839   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38781
	I0814 17:37:03.728928   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.729426   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729495   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.729776   79521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:03.729809   79521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729951   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.729967   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.729973   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.730360   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730371   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.730698   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.730749   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.732979   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.733596   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.735250   79521 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:03.735262   79521 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:37:03.736576   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:37:03.736593   79521 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:37:03.736607   79521 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.736612   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.736620   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:37:03.736637   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.740008   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740123   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740491   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740558   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740676   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.740819   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.740842   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.740872   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.740994   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741120   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.741160   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.741527   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.741692   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.741817   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.749144   79521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34227
	I0814 17:37:03.749482   79521 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:03.749914   79521 main.go:141] libmachine: Using API Version  1
	I0814 17:37:03.749929   79521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:03.750267   79521 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:03.750467   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetState
	I0814 17:37:03.752107   79521 main.go:141] libmachine: (embed-certs-309673) Calling .DriverName
	I0814 17:37:03.752325   79521 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:03.752339   79521 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:37:03.752360   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHHostname
	I0814 17:37:03.754559   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.754845   79521 main.go:141] libmachine: (embed-certs-309673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:61:4e", ip: ""} in network mk-embed-certs-309673: {Iface:virbr2 ExpiryTime:2024-08-14 18:36:39 +0000 UTC Type:0 Mac:52:54:00:ed:61:4e Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:embed-certs-309673 Clientid:01:52:54:00:ed:61:4e}
	I0814 17:37:03.754859   79521 main.go:141] libmachine: (embed-certs-309673) DBG | domain embed-certs-309673 has defined IP address 192.168.61.2 and MAC address 52:54:00:ed:61:4e in network mk-embed-certs-309673
	I0814 17:37:03.755073   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHPort
	I0814 17:37:03.755247   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHKeyPath
	I0814 17:37:03.755402   79521 main.go:141] libmachine: (embed-certs-309673) Calling .GetSSHUsername
	I0814 17:37:03.755529   79521 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/embed-certs-309673/id_rsa Username:docker}
	I0814 17:37:03.877535   79521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:03.897022   79521 node_ready.go:35] waiting up to 6m0s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:03.951512   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:37:03.988066   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:37:03.988085   79521 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:37:04.014925   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:37:04.025506   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:37:04.025531   79521 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:37:04.072457   79521 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:04.072480   79521 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:37:04.104804   79521 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:37:05.067867   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.116315804s)
	I0814 17:37:05.067888   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052939793s)
	I0814 17:37:05.067925   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.067935   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068000   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068023   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068241   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068322   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068336   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068345   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068364   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068454   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068485   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068497   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068505   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.068518   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.068795   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068815   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068823   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.068830   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.068872   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.068905   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.087716   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.087746   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.088086   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.088106   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113388   79521 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.008529856s)
	I0814 17:37:05.113441   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113458   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.113736   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.113787   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.113800   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.113812   79521 main.go:141] libmachine: Making call to close driver server
	I0814 17:37:05.113823   79521 main.go:141] libmachine: (embed-certs-309673) Calling .Close
	I0814 17:37:05.114057   79521 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:37:05.114071   79521 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:37:05.114081   79521 addons.go:475] Verifying addon metrics-server=true in "embed-certs-309673"
	I0814 17:37:05.114163   79521 main.go:141] libmachine: (embed-certs-309673) DBG | Closing plugin on server side
	I0814 17:37:05.116443   79521 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:37:05.118087   79521 addons.go:510] duration metric: took 1.432323959s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:37:03.512364   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512842   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | unable to find current IP address of domain default-k8s-diff-port-885666 in network mk-default-k8s-diff-port-885666
	I0814 17:37:03.512880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | I0814 17:37:03.512785   80921 retry.go:31] will retry after 4.358649621s: waiting for machine to come up
	I0814 17:37:09.324026   80228 start.go:364] duration metric: took 3m22.895078586s to acquireMachinesLock for "old-k8s-version-505584"
	I0814 17:37:09.324085   80228 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:09.324101   80228 fix.go:54] fixHost starting: 
	I0814 17:37:09.324533   80228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:09.324575   80228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:09.344085   80228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43749
	I0814 17:37:09.344490   80228 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:09.344980   80228 main.go:141] libmachine: Using API Version  1
	I0814 17:37:09.345006   80228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:09.345416   80228 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:09.345674   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:09.345842   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetState
	I0814 17:37:09.347489   80228 fix.go:112] recreateIfNeeded on old-k8s-version-505584: state=Stopped err=<nil>
	I0814 17:37:09.347511   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	W0814 17:37:09.347696   80228 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:09.349747   80228 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-505584" ...
	I0814 17:37:05.901013   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.901054   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:07.876377   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876820   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has current primary IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.876845   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Found IP for machine: 192.168.50.184
	I0814 17:37:07.876857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserving static IP address...
	I0814 17:37:07.877281   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.877300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Reserved static IP address: 192.168.50.184
	I0814 17:37:07.877320   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | skip adding static IP to network mk-default-k8s-diff-port-885666 - found existing host DHCP lease matching {name: "default-k8s-diff-port-885666", mac: "52:54:00:f8:cc:3c", ip: "192.168.50.184"}
	I0814 17:37:07.877339   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Getting to WaitForSSH function...
	I0814 17:37:07.877355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Waiting for SSH to be available...
	I0814 17:37:07.879843   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880200   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:07.880242   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:07.880419   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH client type: external
	I0814 17:37:07.880445   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa (-rw-------)
	I0814 17:37:07.880496   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.184 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:07.880517   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | About to run SSH command:
	I0814 17:37:07.880549   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | exit 0
	I0814 17:37:08.007553   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:08.007929   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetConfigRaw
	I0814 17:37:08.009171   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.012358   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.012772   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.012804   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.013076   79871 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/config.json ...
	I0814 17:37:08.013284   79871 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:08.013310   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:08.013579   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.015965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016325   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.016363   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.016491   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.016680   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.016873   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.017004   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.017140   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.017354   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.017376   79871 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:08.132369   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:08.132404   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132657   79871 buildroot.go:166] provisioning hostname "default-k8s-diff-port-885666"
	I0814 17:37:08.132695   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.132906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.136230   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.136696   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.136937   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.137163   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137350   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.137500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.137672   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.137878   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.137900   79871 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-885666 && echo "default-k8s-diff-port-885666" | sudo tee /etc/hostname
	I0814 17:37:08.273593   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-885666
	
	I0814 17:37:08.273626   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.276470   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.276830   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.276862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.277137   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.277382   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277547   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.277713   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.277855   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.278052   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.278072   79871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-885666' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-885666/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-885666' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:08.401522   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:08.401556   79871 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:08.401602   79871 buildroot.go:174] setting up certificates
	I0814 17:37:08.401626   79871 provision.go:84] configureAuth start
	I0814 17:37:08.401650   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetMachineName
	I0814 17:37:08.401963   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:08.404855   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405251   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.405285   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.405521   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.407826   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408338   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.408371   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.408515   79871 provision.go:143] copyHostCerts
	I0814 17:37:08.408583   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:08.408597   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:08.408681   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:08.408812   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:08.408823   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:08.408861   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:08.408947   79871 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:08.408956   79871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:08.408984   79871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:08.409064   79871 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-885666 san=[127.0.0.1 192.168.50.184 default-k8s-diff-port-885666 localhost minikube]
	I0814 17:37:08.613459   79871 provision.go:177] copyRemoteCerts
	I0814 17:37:08.613530   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:08.613575   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.616704   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617044   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.617072   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.617324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.617515   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.617698   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.617844   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:08.705505   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:08.728835   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0814 17:37:08.751995   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:08.774577   79871 provision.go:87] duration metric: took 372.933752ms to configureAuth
	I0814 17:37:08.774609   79871 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:08.774812   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:08.774880   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:08.777840   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778235   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:08.778260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:08.778527   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:08.778752   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.778899   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:08.779020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:08.779162   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:08.779437   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:08.779458   79871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:09.055900   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:09.055927   79871 machine.go:97] duration metric: took 1.04262996s to provisionDockerMachine
	I0814 17:37:09.055943   79871 start.go:293] postStartSetup for "default-k8s-diff-port-885666" (driver="kvm2")
	I0814 17:37:09.055957   79871 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:09.055982   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.056325   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:09.056355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.059396   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.059853   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.059888   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.060064   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.060280   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.060558   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.060745   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.150649   79871 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:09.155263   79871 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:09.155295   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:09.155400   79871 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:09.155500   79871 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:09.155623   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:09.167051   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:09.197223   79871 start.go:296] duration metric: took 141.264897ms for postStartSetup
	I0814 17:37:09.197324   79871 fix.go:56] duration metric: took 21.221265818s for fixHost
	I0814 17:37:09.197356   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.201388   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.201965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.202011   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.202109   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.202354   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202569   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.202800   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.203010   79871 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:09.203196   79871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.184 22 <nil> <nil>}
	I0814 17:37:09.203209   79871 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:09.323868   79871 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657029.302975780
	
	I0814 17:37:09.323892   79871 fix.go:216] guest clock: 1723657029.302975780
	I0814 17:37:09.323900   79871 fix.go:229] Guest: 2024-08-14 17:37:09.30297578 +0000 UTC Remote: 2024-08-14 17:37:09.197335302 +0000 UTC m=+253.546385360 (delta=105.640478ms)
	I0814 17:37:09.323918   79871 fix.go:200] guest clock delta is within tolerance: 105.640478ms
	I0814 17:37:09.323923   79871 start.go:83] releasing machines lock for "default-k8s-diff-port-885666", held for 21.347903434s
	I0814 17:37:09.323948   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.324209   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:09.327260   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327802   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.327833   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.327993   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328500   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328727   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:37:09.328814   79871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:09.328862   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.328955   79871 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:09.328972   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:37:09.331813   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332081   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332233   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332274   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332365   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332490   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:09.332512   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:09.332555   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332669   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:37:09.332761   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.332824   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:37:09.332882   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.332926   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:37:09.333021   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:37:09.416041   79871 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:09.456024   79871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:09.604623   79871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:09.610562   79871 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:09.610624   79871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:09.627298   79871 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:09.627344   79871 start.go:495] detecting cgroup driver to use...
	I0814 17:37:09.627418   79871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:09.648212   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:09.666047   79871 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:09.666107   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:09.681875   79871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:09.695920   79871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:09.824502   79871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:09.979561   79871 docker.go:233] disabling docker service ...
	I0814 17:37:09.979658   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:09.996877   79871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:10.014264   79871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:10.166653   79871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:10.288261   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:10.301868   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:10.320716   79871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:10.320788   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.331099   79871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:10.331158   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.342841   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.353762   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.364604   79871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:10.376521   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.386787   79871 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.406713   79871 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:10.418047   79871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:10.428368   79871 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:10.428433   79871 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:10.442759   79871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:10.452993   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:10.563097   79871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:10.716953   79871 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:10.717031   79871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:10.722685   79871 start.go:563] Will wait 60s for crictl version
	I0814 17:37:10.722759   79871 ssh_runner.go:195] Run: which crictl
	I0814 17:37:10.726621   79871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:10.764534   79871 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:10.764628   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.791513   79871 ssh_runner.go:195] Run: crio --version
	I0814 17:37:10.822380   79871 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
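For reference, the CRI-O preparation logged above can be reproduced by hand roughly as follows. The paths, pause image and cgroup driver are taken from the log; the consolidated one-shot form is an assumption, not the exact code minikube runs:

    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter                      # the bridge-nf sysctls only exist once this module is loaded
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version                             # expect RuntimeName cri-o, RuntimeVersion 1.29.1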
	I0814 17:37:09.351136   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .Start
	I0814 17:37:09.351338   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring networks are active...
	I0814 17:37:09.352075   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network default is active
	I0814 17:37:09.352333   80228 main.go:141] libmachine: (old-k8s-version-505584) Ensuring network mk-old-k8s-version-505584 is active
	I0814 17:37:09.352701   80228 main.go:141] libmachine: (old-k8s-version-505584) Getting domain xml...
	I0814 17:37:09.353363   80228 main.go:141] libmachine: (old-k8s-version-505584) Creating domain...
	I0814 17:37:10.664390   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting to get IP...
	I0814 17:37:10.665484   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.665915   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.665980   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.665888   81116 retry.go:31] will retry after 285.047327ms: waiting for machine to come up
	I0814 17:37:10.952552   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:10.953009   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:10.953036   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:10.952973   81116 retry.go:31] will retry after 281.728141ms: waiting for machine to come up
	I0814 17:37:11.236576   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.237153   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.237192   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.237079   81116 retry.go:31] will retry after 341.673836ms: waiting for machine to come up
	I0814 17:37:10.401790   79521 node_ready.go:53] node "embed-certs-309673" has status "Ready":"False"
	I0814 17:37:11.400713   79521 node_ready.go:49] node "embed-certs-309673" has status "Ready":"True"
	I0814 17:37:11.400742   79521 node_ready.go:38] duration metric: took 7.503686271s for node "embed-certs-309673" to be "Ready" ...
	I0814 17:37:11.400755   79521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:11.408217   79521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414215   79521 pod_ready.go:92] pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:11.414244   79521 pod_ready.go:81] duration metric: took 5.997997ms for pod "coredns-6f6b679f8f-kccp8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:11.414256   79521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:13.420804   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:10.824020   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetIP
	I0814 17:37:10.827965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828426   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:37:10.828465   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:37:10.828807   79871 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:10.833261   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:10.846928   79871 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:10.847080   79871 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:10.847142   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:10.889355   79871 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:10.889453   79871 ssh_runner.go:195] Run: which lz4
	I0814 17:37:10.894405   79871 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0814 17:37:10.898992   79871 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:10.899029   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0814 17:37:12.155402   79871 crio.go:462] duration metric: took 1.261016682s to copy over tarball
	I0814 17:37:12.155485   79871 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:14.344118   79871 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18859644s)
	I0814 17:37:14.344162   79871 crio.go:469] duration metric: took 2.188726026s to extract the tarball
	I0814 17:37:14.344173   79871 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:14.380317   79871 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:14.428289   79871 crio.go:514] all images are preloaded for cri-o runtime.
	I0814 17:37:14.428312   79871 cache_images.go:84] Images are preloaded, skipping loading
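The preload handling above asks crictl for the image list, ships the lz4 tarball only when the expected images are missing, and re-checks afterwards. A hand-run sketch of the same sequence (paths taken from the log; the combined form and the grep pattern are assumptions):

    sudo crictl images --output json | grep -q 'kube-apiserver:v1.31.0' || {
      # minikube copies the cached tarball to the node as /preloaded.tar.lz4 before extracting it
      sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
      sudo rm -f /preloaded.tar.lz4
    }
    sudo crictl images --output json    # should now report the v1.31.0 images as present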
	I0814 17:37:14.428326   79871 kubeadm.go:934] updating node { 192.168.50.184 8444 v1.31.0 crio true true} ...
	I0814 17:37:14.428422   79871 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-885666 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.184
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:14.428491   79871 ssh_runner.go:195] Run: crio config
	I0814 17:37:14.475385   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:14.475416   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:14.475433   79871 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:14.475464   79871 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.184 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-885666 NodeName:default-k8s-diff-port-885666 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.184"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.184 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:37:14.475645   79871 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.184
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-885666"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.184
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.184"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:14.475712   79871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:37:14.485148   79871 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:14.485206   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:14.494161   79871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0814 17:37:14.511050   79871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:14.526395   79871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
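The kubeadm YAML shown above is what gets staged on the node as /var/tmp/minikube/kubeadm.yaml.new; later in this run the staged copy is diffed against the live one and promoted before the init phases run. A rough approximation of that step (the conditional form is an assumption, the paths are from the log):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml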
	I0814 17:37:14.543061   79871 ssh_runner.go:195] Run: grep 192.168.50.184	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:14.546747   79871 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.184	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:14.558022   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:14.671818   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:14.688541   79871 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666 for IP: 192.168.50.184
	I0814 17:37:14.688583   79871 certs.go:194] generating shared ca certs ...
	I0814 17:37:14.688609   79871 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:14.688823   79871 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:14.688889   79871 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:14.688903   79871 certs.go:256] generating profile certs ...
	I0814 17:37:14.689020   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/client.key
	I0814 17:37:14.689132   79871 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key.690c84bc
	I0814 17:37:14.689182   79871 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key
	I0814 17:37:14.689310   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:14.689367   79871 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:14.689385   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:14.689422   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:14.689453   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:14.689479   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:14.689524   79871 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:14.690168   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:14.717906   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:14.759373   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:14.809775   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:14.834875   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0814 17:37:14.857860   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:14.886813   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:14.909803   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/default-k8s-diff-port-885666/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:14.935075   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:14.959759   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:14.985877   79871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:15.008456   79871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:15.025602   79871 ssh_runner.go:195] Run: openssl version
	I0814 17:37:15.031392   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:15.041931   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046475   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.046531   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:15.052377   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:15.063000   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:15.073463   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078411   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.078471   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:15.083835   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:15.093753   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:15.103876   79871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108487   79871 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.108559   79871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:15.114104   79871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:37:15.124285   79871 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:15.128515   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:15.134223   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:15.139700   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:15.145537   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:15.151287   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:15.156766   79871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
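The openssl invocations above use -checkend 86400, which exits non-zero if the certificate will no longer be valid 86400 seconds (24 hours) from now, presumably so the restart path can decide whether the existing control-plane certs are still usable. A standalone example against one of the certs named in the log:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "certificate is still valid for at least 24h"
    else
      echo "certificate expires within 24h (or could not be read)"
    fi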
	I0814 17:37:15.162149   79871 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-885666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-885666 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:15.162256   79871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:15.162314   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.198745   79871 cri.go:89] found id: ""
	I0814 17:37:15.198814   79871 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:15.212198   79871 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:15.212216   79871 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:15.212256   79871 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:15.224275   79871 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:15.225218   79871 kubeconfig.go:125] found "default-k8s-diff-port-885666" server: "https://192.168.50.184:8444"
	I0814 17:37:15.227291   79871 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:15.237448   79871 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.184
	I0814 17:37:15.237494   79871 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:15.237509   79871 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:15.237563   79871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:15.281593   79871 cri.go:89] found id: ""
	I0814 17:37:15.281662   79871 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:15.298596   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:15.308702   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:15.308723   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:15.308779   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:37:15.318348   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:15.318409   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:15.330049   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:37:15.341283   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:15.341373   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:15.350584   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.361658   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:15.361718   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:15.373526   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:37:15.382360   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:15.382432   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:15.392477   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:15.402387   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:15.528954   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
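Rather than a full kubeadm init, the restart regenerates the control plane phase by phase: certs and kubeconfig here, then kubelet-start, control-plane, etcd and addons further down in this log. A consolidated sketch of that sequence (the loop is an assumption; the phases, PATH and config file match the log):

    CONF=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local" "addon all"; do
      # $phase is intentionally unquoted so e.g. "certs all" expands to two arguments
      sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase $phase --config "$CONF"
    done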
	I0814 17:37:11.580887   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:11.581466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:11.581500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:11.581392   81116 retry.go:31] will retry after 514.448726ms: waiting for machine to come up
	I0814 17:37:12.098137   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.098652   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.098740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.098642   81116 retry.go:31] will retry after 649.302617ms: waiting for machine to come up
	I0814 17:37:12.749349   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:12.749777   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:12.749803   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:12.749736   81116 retry.go:31] will retry after 897.486278ms: waiting for machine to come up
	I0814 17:37:13.649145   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:13.649666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:13.649698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:13.649621   81116 retry.go:31] will retry after 1.017213079s: waiting for machine to come up
	I0814 17:37:14.669187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:14.669715   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:14.669740   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:14.669679   81116 retry.go:31] will retry after 1.014709613s: waiting for machine to come up
	I0814 17:37:15.685748   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:15.686269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:15.686299   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:15.686217   81116 retry.go:31] will retry after 1.476940798s: waiting for machine to come up
	I0814 17:37:15.422067   79521 pod_ready.go:102] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.421689   79521 pod_ready.go:92] pod "etcd-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.421715   79521 pod_ready.go:81] duration metric: took 5.007451471s for pod "etcd-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.421724   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426620   79521 pod_ready.go:92] pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.426644   79521 pod_ready.go:81] duration metric: took 4.912475ms for pod "kube-apiserver-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.426657   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430754   79521 pod_ready.go:92] pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.430776   79521 pod_ready.go:81] duration metric: took 4.110475ms for pod "kube-controller-manager-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.430787   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434469   79521 pod_ready.go:92] pod "kube-proxy-z8x9t" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.434487   79521 pod_ready.go:81] duration metric: took 3.693253ms for pod "kube-proxy-z8x9t" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.434498   79521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438294   79521 pod_ready.go:92] pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:16.438314   79521 pod_ready.go:81] duration metric: took 3.80298ms for pod "kube-scheduler-embed-certs-309673" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:16.438346   79521 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:18.445838   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:16.453075   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.676680   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.741803   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:16.831091   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:16.831186   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.332193   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:17.831346   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.331620   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:18.832011   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.331528   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:19.348083   79871 api_server.go:72] duration metric: took 2.516990388s to wait for apiserver process to appear ...
	I0814 17:37:19.348119   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:37:19.348144   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:17.164541   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:17.165093   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:17.165122   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:17.165017   81116 retry.go:31] will retry after 1.644726601s: waiting for machine to come up
	I0814 17:37:18.811628   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:18.812199   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:18.812224   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:18.812132   81116 retry.go:31] will retry after 2.740531885s: waiting for machine to come up
	I0814 17:37:21.576628   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.576657   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.576672   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.601355   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:37:21.601389   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:37:21.848481   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:21.855499   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:21.855530   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.349158   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.353345   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:37:22.353368   79871 api_server.go:103] status: https://192.168.50.184:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:37:22.848954   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:37:22.853912   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:37:22.865096   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:37:22.865127   79871 api_server.go:131] duration metric: took 3.516999004s to wait for apiserver health ...
	I0814 17:37:22.865138   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:37:22.865146   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:22.866812   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
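The 403 and 500 responses above are expected during startup: anonymous probes of /healthz are rejected while RBAC bootstrap is still pending, and the remaining [-] hooks clear shortly afterwards. Once a kubeconfig exists, the same verbose check list can be fetched with kubectl (the admin.conf path is an assumption based on the files regenerated earlier in this log):

    kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'
    # prints the same [+]/[-] hook list; plain /healthz returns just "ok" once all checks pass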
	I0814 17:37:20.446123   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.446518   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:24.945729   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:22.867939   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:37:22.881586   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:37:22.899815   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:37:22.910873   79871 system_pods.go:59] 8 kube-system pods found
	I0814 17:37:22.910928   79871 system_pods.go:61] "coredns-6f6b679f8f-mxc9v" [d1b9d422-faff-4709-a375-f8783e75e18c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:37:22.910946   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [a5473465-a1c1-4413-8e77-74fb1eb398a4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:37:22.910956   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [06c53e48-b156-42b1-b381-818f75821196] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:37:22.910966   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [18a2d7fb-4e18-4880-8812-63d25934699b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:37:22.910977   79871 system_pods.go:61] "kube-proxy-4rrff" [14453cc8-da7d-4dd4-b7fa-89a26dbbf23b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0814 17:37:22.910995   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [f0455f16-9a3e-4ede-8524-f701b1ab4ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0814 17:37:22.911005   79871 system_pods.go:61] "metrics-server-6867b74b74-qtzm8" [04c797ec-2e38-42a7-a023-5f60c451f780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:37:22.911020   79871 system_pods.go:61] "storage-provisioner" [88c2e8f0-0706-494a-8e83-0ede8f129040] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0814 17:37:22.911032   79871 system_pods.go:74] duration metric: took 11.192968ms to wait for pod list to return data ...
	I0814 17:37:22.911044   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:37:22.915096   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:37:22.915128   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:37:22.915140   79871 node_conditions.go:105] duration metric: took 4.087198ms to run NodePressure ...
	I0814 17:37:22.915165   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:23.204612   79871 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209643   79871 kubeadm.go:739] kubelet initialised
	I0814 17:37:23.209665   79871 kubeadm.go:740] duration metric: took 5.023123ms waiting for restarted kubelet to initialise ...
	I0814 17:37:23.209673   79871 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:37:23.215957   79871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.221969   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.221993   79871 pod_ready.go:81] duration metric: took 6.011053ms for pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.222008   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "coredns-6f6b679f8f-mxc9v" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.222014   79871 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.227119   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227147   79871 pod_ready.go:81] duration metric: took 5.125006ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.227157   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.227163   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:23.231297   79871 pod_ready.go:97] node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231321   79871 pod_ready.go:81] duration metric: took 4.149023ms for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	E0814 17:37:23.231346   79871 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-885666" hosting pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-885666" has status "Ready":"False"
	I0814 17:37:23.231355   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:25.239956   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:21.555057   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:21.555530   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:21.555562   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:21.555484   81116 retry.go:31] will retry after 3.159225533s: waiting for machine to come up
	I0814 17:37:24.716173   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:24.716482   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | unable to find current IP address of domain old-k8s-version-505584 in network mk-old-k8s-version-505584
	I0814 17:37:24.716507   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | I0814 17:37:24.716451   81116 retry.go:31] will retry after 3.32732131s: waiting for machine to come up
	I0814 17:37:29.512066   79367 start.go:364] duration metric: took 55.26941078s to acquireMachinesLock for "no-preload-545149"
	I0814 17:37:29.512115   79367 start.go:96] Skipping create...Using existing machine configuration
	I0814 17:37:29.512123   79367 fix.go:54] fixHost starting: 
	I0814 17:37:29.512539   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:37:29.512574   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:37:29.529625   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0814 17:37:29.530074   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:37:29.530558   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:37:29.530585   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:37:29.530930   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:37:29.531149   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:29.531291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:37:29.532999   79367 fix.go:112] recreateIfNeeded on no-preload-545149: state=Stopped err=<nil>
	I0814 17:37:29.533049   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	W0814 17:37:29.533224   79367 fix.go:138] unexpected machine state, will restart: <nil>
	I0814 17:37:29.535000   79367 out.go:177] * Restarting existing kvm2 VM for "no-preload-545149" ...
	I0814 17:37:27.445398   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.945246   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:27.737698   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:29.737890   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:28.045690   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046151   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has current primary IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.046177   80228 main.go:141] libmachine: (old-k8s-version-505584) Found IP for machine: 192.168.72.49
	I0814 17:37:28.046192   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserving static IP address...
	I0814 17:37:28.046500   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.046524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | skip adding static IP to network mk-old-k8s-version-505584 - found existing host DHCP lease matching {name: "old-k8s-version-505584", mac: "52:54:00:b6:27:ea", ip: "192.168.72.49"}
	I0814 17:37:28.046540   80228 main.go:141] libmachine: (old-k8s-version-505584) Reserved static IP address: 192.168.72.49
	I0814 17:37:28.046559   80228 main.go:141] libmachine: (old-k8s-version-505584) Waiting for SSH to be available...
	I0814 17:37:28.046571   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Getting to WaitForSSH function...
	I0814 17:37:28.048709   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049082   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.049106   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.049252   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH client type: external
	I0814 17:37:28.049285   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa (-rw-------)
	I0814 17:37:28.049325   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:28.049342   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | About to run SSH command:
	I0814 17:37:28.049356   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | exit 0
	I0814 17:37:28.179844   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:28.180193   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetConfigRaw
	I0814 17:37:28.180865   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.183617   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184074   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.184118   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.184367   80228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/config.json ...
	I0814 17:37:28.184641   80228 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:28.184663   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:28.184891   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.187158   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187517   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.187547   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.187696   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.187857   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188027   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.188178   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.188320   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.188570   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.188587   80228 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:28.303564   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:28.303597   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.303831   80228 buildroot.go:166] provisioning hostname "old-k8s-version-505584"
	I0814 17:37:28.303856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.304021   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.306826   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307180   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.307210   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.307415   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.307608   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307769   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.307915   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.308131   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.308336   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.308354   80228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-505584 && echo "old-k8s-version-505584" | sudo tee /etc/hostname
	I0814 17:37:28.434224   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-505584
	
	I0814 17:37:28.434261   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.437350   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437633   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.437666   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.437856   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.438077   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438245   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.438395   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.438623   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.438832   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.438857   80228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-505584' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-505584/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-505584' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:28.564784   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:28.564815   80228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:28.564858   80228 buildroot.go:174] setting up certificates
	I0814 17:37:28.564872   80228 provision.go:84] configureAuth start
	I0814 17:37:28.564884   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetMachineName
	I0814 17:37:28.565188   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:28.568217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.568698   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.568731   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.569013   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.571364   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571780   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.571805   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.571961   80228 provision.go:143] copyHostCerts
	I0814 17:37:28.572023   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:28.572032   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:28.572076   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:28.572176   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:28.572184   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:28.572206   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:28.572275   80228 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:28.572284   80228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:28.572337   80228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:28.572435   80228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-505584 san=[127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584]
	I0814 17:37:28.804798   80228 provision.go:177] copyRemoteCerts
	I0814 17:37:28.804853   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:28.804879   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.807967   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808269   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.808302   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.808458   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.808690   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.808874   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.809001   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:28.900346   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:28.926959   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0814 17:37:28.955373   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0814 17:37:28.984436   80228 provision.go:87] duration metric: took 419.552519ms to configureAuth
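	(Editor's note: should the server certificate generated by configureAuth above need a manual sanity check, its SANs can be inspected on the build host with a standard openssl call. This is a hypothetical check, not part of the test run; the path and SAN list are taken from the provision.go lines above.)
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
	    # expected SANs per the log: 127.0.0.1 192.168.72.49 localhost minikube old-k8s-version-505584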
	I0814 17:37:28.984463   80228 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:28.984630   80228 config.go:182] Loaded profile config "old-k8s-version-505584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0814 17:37:28.984713   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:28.987602   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988077   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:28.988107   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:28.988237   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:28.988486   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988641   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:28.988768   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:28.988986   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:28.989209   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:28.989234   80228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:29.262630   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:29.262656   80228 machine.go:97] duration metric: took 1.078000469s to provisionDockerMachine
	I0814 17:37:29.262669   80228 start.go:293] postStartSetup for "old-k8s-version-505584" (driver="kvm2")
	I0814 17:37:29.262683   80228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:29.262704   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.263051   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:29.263082   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.266020   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266466   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.266495   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.266720   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.266919   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.267093   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.267253   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.354027   80228 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:29.358196   80228 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:29.358224   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:29.358304   80228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:29.358416   80228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:29.358543   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:29.367802   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:29.392802   80228 start.go:296] duration metric: took 130.117007ms for postStartSetup
	I0814 17:37:29.392846   80228 fix.go:56] duration metric: took 20.068754346s for fixHost
	I0814 17:37:29.392871   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.395638   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396032   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.396064   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.396251   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.396516   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396698   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.396893   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.397155   80228 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:29.397326   80228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.49 22 <nil> <nil>}
	I0814 17:37:29.397340   80228 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0814 17:37:29.511889   80228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657049.468340520
	
	I0814 17:37:29.511913   80228 fix.go:216] guest clock: 1723657049.468340520
	I0814 17:37:29.511923   80228 fix.go:229] Guest: 2024-08-14 17:37:29.46834052 +0000 UTC Remote: 2024-08-14 17:37:29.392851248 +0000 UTC m=+223.104093144 (delta=75.489272ms)
	I0814 17:37:29.511983   80228 fix.go:200] guest clock delta is within tolerance: 75.489272ms
	I0814 17:37:29.511996   80228 start.go:83] releasing machines lock for "old-k8s-version-505584", held for 20.187937886s
	I0814 17:37:29.512031   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.512333   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:29.515152   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515487   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.515524   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.515735   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516299   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516497   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .DriverName
	I0814 17:37:29.516643   80228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:29.516723   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.516727   80228 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:29.516752   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHHostname
	I0814 17:37:29.519600   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.519751   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520017   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520045   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520164   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:29.520187   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:29.520192   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520341   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHPort
	I0814 17:37:29.520423   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520520   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHKeyPath
	I0814 17:37:29.520588   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520646   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetSSHUsername
	I0814 17:37:29.520718   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.520780   80228 sshutil.go:53] new ssh client: &{IP:192.168.72.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/old-k8s-version-505584/id_rsa Username:docker}
	I0814 17:37:29.642824   80228 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:29.648834   80228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:29.795482   80228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:29.801407   80228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:29.801486   80228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:29.821662   80228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:29.821684   80228 start.go:495] detecting cgroup driver to use...
	I0814 17:37:29.821761   80228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:29.843829   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:29.859505   80228 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:29.859590   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:29.873790   80228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:29.889295   80228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:30.035909   80228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:30.209521   80228 docker.go:233] disabling docker service ...
	I0814 17:37:30.209574   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:30.226980   80228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:30.241678   80228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:30.375116   80228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:30.498357   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0814 17:37:30.512272   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:30.533062   80228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0814 17:37:30.533130   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.543595   80228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:30.543664   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.554139   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.564417   80228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:30.574627   80228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:30.584957   80228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:30.594667   80228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:30.594720   80228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:30.606826   80228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0814 17:37:30.621990   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:30.758992   80228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:30.915494   80228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:30.915572   80228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:30.920692   80228 start.go:563] Will wait 60s for crictl version
	I0814 17:37:30.920767   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:30.924365   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:30.964662   80228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0814 17:37:30.964756   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:30.995534   80228 ssh_runner.go:195] Run: crio --version
	I0814 17:37:31.025400   80228 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0814 17:37:31.026943   80228 main.go:141] libmachine: (old-k8s-version-505584) Calling .GetIP
	I0814 17:37:31.030217   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030630   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:27:ea", ip: ""} in network mk-old-k8s-version-505584: {Iface:virbr4 ExpiryTime:2024-08-14 18:37:20 +0000 UTC Type:0 Mac:52:54:00:b6:27:ea Iaid: IPaddr:192.168.72.49 Prefix:24 Hostname:old-k8s-version-505584 Clientid:01:52:54:00:b6:27:ea}
	I0814 17:37:31.030665   80228 main.go:141] libmachine: (old-k8s-version-505584) DBG | domain old-k8s-version-505584 has defined IP address 192.168.72.49 and MAC address 52:54:00:b6:27:ea in network mk-old-k8s-version-505584
	I0814 17:37:31.030943   80228 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:31.034960   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:37:31.047742   80228 kubeadm.go:883] updating cluster {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:31.047864   80228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 17:37:31.047926   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:31.092203   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:31.092278   80228 ssh_runner.go:195] Run: which lz4
	I0814 17:37:31.096471   80228 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0814 17:37:31.100610   80228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0814 17:37:31.100642   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0814 17:37:29.536310   79367 main.go:141] libmachine: (no-preload-545149) Calling .Start
	I0814 17:37:29.536513   79367 main.go:141] libmachine: (no-preload-545149) Ensuring networks are active...
	I0814 17:37:29.537431   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network default is active
	I0814 17:37:29.537935   79367 main.go:141] libmachine: (no-preload-545149) Ensuring network mk-no-preload-545149 is active
	I0814 17:37:29.538468   79367 main.go:141] libmachine: (no-preload-545149) Getting domain xml...
	I0814 17:37:29.539383   79367 main.go:141] libmachine: (no-preload-545149) Creating domain...
	I0814 17:37:30.863155   79367 main.go:141] libmachine: (no-preload-545149) Waiting to get IP...
	I0814 17:37:30.864257   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:30.864722   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:30.864812   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:30.864695   81248 retry.go:31] will retry after 244.342973ms: waiting for machine to come up
	I0814 17:37:31.111211   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.111784   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.111806   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.111735   81248 retry.go:31] will retry after 277.033145ms: waiting for machine to come up
	I0814 17:37:31.390071   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.390511   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.390554   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.390429   81248 retry.go:31] will retry after 320.225451ms: waiting for machine to come up
	I0814 17:37:31.949069   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:34.445833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:31.741110   79871 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:33.239418   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.239449   79871 pod_ready.go:81] duration metric: took 10.008084028s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.239462   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244600   79871 pod_ready.go:92] pod "kube-proxy-4rrff" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:33.244628   79871 pod_ready.go:81] duration metric: took 5.157296ms for pod "kube-proxy-4rrff" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:33.244648   79871 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253613   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:37:35.253643   79871 pod_ready.go:81] duration metric: took 2.008985731s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:35.253657   79871 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	I0814 17:37:32.582064   80228 crio.go:462] duration metric: took 1.485645107s to copy over tarball
	I0814 17:37:32.582151   80228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0814 17:37:35.556765   80228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.974581109s)
	I0814 17:37:35.556795   80228 crio.go:469] duration metric: took 2.9747s to extract the tarball
	I0814 17:37:35.556802   80228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0814 17:37:35.599129   80228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:35.632752   80228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0814 17:37:35.632775   80228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:35.632831   80228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.632864   80228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.632846   80228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.632892   80228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0814 17:37:35.632911   80228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.632944   80228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.633112   80228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634793   80228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.634821   80228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0814 17:37:35.634824   80228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:35.634885   80228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.634910   80228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.635009   80228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.635082   80228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:35.635265   80228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:35.905566   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0814 17:37:35.953168   80228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0814 17:37:35.953210   80228 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0814 17:37:35.953260   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:35.957961   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:35.978859   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:35.978920   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:35.988556   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:35.993281   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:35.997933   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.018501   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.043527   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.146739   80228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0814 17:37:36.146812   80228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0814 17:37:36.146832   80228 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.146852   80228 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.146881   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.146891   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163832   80228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0814 17:37:36.163856   80228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0814 17:37:36.163877   80228 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.163889   80228 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.163923   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163924   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.163927   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0814 17:37:36.172482   80228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0814 17:37:36.172530   80228 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.172599   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.195157   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.195208   80228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0814 17:37:36.195165   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.195242   80228 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.195245   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.195277   80228 ssh_runner.go:195] Run: which crictl
	I0814 17:37:36.237454   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0814 17:37:36.237519   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.237549   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.292614   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.306771   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.306794   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:31.712067   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:31.712601   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:31.712630   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:31.712575   81248 retry.go:31] will retry after 546.687472ms: waiting for machine to come up
	I0814 17:37:32.261457   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.261921   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.261950   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.261854   81248 retry.go:31] will retry after 484.345236ms: waiting for machine to come up
	I0814 17:37:32.747475   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:32.748118   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:32.748149   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:32.748060   81248 retry.go:31] will retry after 899.564198ms: waiting for machine to come up
	I0814 17:37:33.649684   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:33.650206   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:33.650234   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:33.650155   81248 retry.go:31] will retry after 1.039934932s: waiting for machine to come up
	I0814 17:37:34.691741   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:34.692197   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:34.692220   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:34.692169   81248 retry.go:31] will retry after 925.402437ms: waiting for machine to come up
	I0814 17:37:35.618737   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:35.619169   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:35.619200   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:35.619102   81248 retry.go:31] will retry after 1.401066913s: waiting for machine to come up
	I0814 17:37:36.447039   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:38.945321   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:37.260912   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:39.759967   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:36.321893   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.339836   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.339929   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.426588   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0814 17:37:36.426659   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0814 17:37:36.433149   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.469717   80228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:36.477512   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0814 17:37:36.477583   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0814 17:37:36.477761   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0814 17:37:36.538635   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0814 17:37:36.557712   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0814 17:37:36.558304   80228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0814 17:37:36.700263   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0814 17:37:36.700333   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0814 17:37:36.700410   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0814 17:37:36.700481   80228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0814 17:37:36.700527   80228 cache_images.go:92] duration metric: took 1.067740607s to LoadCachedImages
	W0814 17:37:36.700602   80228 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0814 17:37:36.700623   80228 kubeadm.go:934] updating node { 192.168.72.49 8443 v1.20.0 crio true true} ...
	I0814 17:37:36.700757   80228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-505584 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:37:36.700846   80228 ssh_runner.go:195] Run: crio config
	I0814 17:37:36.748814   80228 cni.go:84] Creating CNI manager for ""
	I0814 17:37:36.748843   80228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:37:36.748860   80228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:37:36.748885   80228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.49 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-505584 NodeName:old-k8s-version-505584 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0814 17:37:36.749053   80228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-505584"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:37:36.749129   80228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0814 17:37:36.760058   80228 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:37:36.760131   80228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:37:36.769388   80228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0814 17:37:36.786594   80228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:37:36.807695   80228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0814 17:37:36.825609   80228 ssh_runner.go:195] Run: grep 192.168.72.49	control-plane.minikube.internal$ /etc/hosts
	I0814 17:37:36.829296   80228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.49	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
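(The hosts-file update just above follows a filter-then-replace pattern: drop any stale control-plane line, append the current mapping, and copy the rewritten file back over /etc/hosts in one step. A minimal standalone sketch of the same idea is shown below; the IP and hostname are placeholders, not values taken from this log.)

    #!/usr/bin/env bash
    # Rewrite /etc/hosts so it contains exactly one entry for a given hostname.
    set -euo pipefail

    ip="192.0.2.10"                 # placeholder IP
    host="control-plane.example"    # placeholder hostname
    tmp="$(mktemp)"

    # Keep every line except an existing mapping for $host, then append the new one.
    grep -v "	${host}\$" /etc/hosts > "$tmp" || true
    printf '%s\t%s\n' "$ip" "$host" >> "$tmp"

    # Copy the finished file into place so readers never see a half-written /etc/hosts.
    sudo cp "$tmp" /etc/hosts
    rm -f "$tmp"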
	I0814 17:37:36.841882   80228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:36.976199   80228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:37:36.993682   80228 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584 for IP: 192.168.72.49
	I0814 17:37:36.993707   80228 certs.go:194] generating shared ca certs ...
	I0814 17:37:36.993728   80228 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:36.993924   80228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:37:36.993985   80228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:37:36.993998   80228 certs.go:256] generating profile certs ...
	I0814 17:37:36.994115   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/client.key
	I0814 17:37:36.994206   80228 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key.c375770f
	I0814 17:37:36.994261   80228 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key
	I0814 17:37:36.994428   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:37:36.994478   80228 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:37:36.994492   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:37:36.994522   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:37:36.994557   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:37:36.994603   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:37:36.994661   80228 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:36.995558   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:37:37.043910   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:37:37.073810   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:37:37.097939   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:37:37.124449   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0814 17:37:37.154747   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:37:37.179474   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:37:37.204471   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/old-k8s-version-505584/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:37:37.228579   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:37:37.266929   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:37:37.292912   80228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:37:37.316803   80228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:37:37.332934   80228 ssh_runner.go:195] Run: openssl version
	I0814 17:37:37.339316   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:37:37.349829   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354230   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.354297   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:37:37.360089   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:37:37.371417   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:37:37.381777   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385894   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.385955   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:37:37.391826   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:37:37.402049   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:37:37.412038   80228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416395   80228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.416448   80228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:37:37.421794   80228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
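(The openssl/ln sequence above installs each certificate under /etc/ssl/certs using the subject-hash naming convention that OpenSSL's lookup code expects: a symlink named <subject-hash>.0 pointing at the PEM file. A minimal sketch of that convention follows; the certificate path is a placeholder, not one from this log.)

    #!/usr/bin/env bash
    # Install a CA certificate so OpenSSL can find it by subject hash.
    set -euo pipefail

    cert="/usr/share/ca-certificates/example-ca.pem"   # placeholder path

    # OpenSSL resolves trust lookups via <subject-hash>.0 links in the certs dir.
    hash="$(openssl x509 -hash -noout -in "$cert")"
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"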
	I0814 17:37:37.431868   80228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:37:37.436305   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:37:37.442838   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:37:37.448991   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:37:37.454769   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:37:37.460381   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:37:37.466406   80228 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0814 17:37:37.472466   80228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-505584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-505584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.49 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:37:37.472584   80228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:37:37.472636   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.508256   80228 cri.go:89] found id: ""
	I0814 17:37:37.508323   80228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:37:37.518824   80228 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:37:37.518856   80228 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:37:37.518941   80228 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:37:37.529328   80228 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:37:37.530242   80228 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-505584" does not appear in /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:37:37.530890   80228 kubeconfig.go:62] /home/jenkins/minikube-integration/19446-13977/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-505584" cluster setting kubeconfig missing "old-k8s-version-505584" context setting]
	I0814 17:37:37.531922   80228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:37:37.539843   80228 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:37:37.550012   80228 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.49
	I0814 17:37:37.550051   80228 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:37:37.550063   80228 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:37:37.550113   80228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:37:37.590226   80228 cri.go:89] found id: ""
	I0814 17:37:37.590307   80228 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:37:37.606242   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:37:37.615340   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:37:37.615377   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:37:37.615436   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:37:37.623996   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:37:37.624063   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:37:37.633356   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:37:37.642888   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:37:37.642958   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:37:37.652532   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.661607   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:37:37.661679   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:37:37.670876   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:37:37.679780   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:37:37.679846   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:37:37.690044   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:37:37.699617   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:37.813799   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.666487   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:38.901307   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.029983   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:37:39.139056   80228 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:37:39.139133   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:39.639191   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.139315   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:40.639292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:41.139421   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
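(The repeated pgrep calls above are a simple readiness poll: re-run the process check every half second until a kube-apiserver process appears. A standalone sketch of that loop is shown below; the timeout value is a placeholder assumption.)

    #!/usr/bin/env bash
    # Poll until a kube-apiserver process appears, or give up after a deadline.
    set -euo pipefail

    deadline=$((SECONDS + 240))   # placeholder timeout, in seconds

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' > /dev/null; do
      if (( SECONDS >= deadline )); then
        echo "kube-apiserver did not appear in time" >&2
        exit 1
      fi
      sleep 0.5
    done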
	I0814 17:37:37.021766   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:37.022253   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:37.022282   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:37.022216   81248 retry.go:31] will retry after 2.184222941s: waiting for machine to come up
	I0814 17:37:39.209777   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:39.210239   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:39.210265   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:39.210203   81248 retry.go:31] will retry after 2.903962154s: waiting for machine to come up
	I0814 17:37:41.445413   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:43.949816   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.760985   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:44.260273   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:41.639312   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.139387   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.639981   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.139499   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:43.639391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.139425   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:44.639677   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.139466   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:45.639426   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:46.140065   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:42.116682   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:42.117116   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:42.117154   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:42.117086   81248 retry.go:31] will retry after 3.387467992s: waiting for machine to come up
	I0814 17:37:45.505680   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:45.506034   79367 main.go:141] libmachine: (no-preload-545149) DBG | unable to find current IP address of domain no-preload-545149 in network mk-no-preload-545149
	I0814 17:37:45.506056   79367 main.go:141] libmachine: (no-preload-545149) DBG | I0814 17:37:45.505986   81248 retry.go:31] will retry after 2.944973353s: waiting for machine to come up
	I0814 17:37:46.443772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:48.445046   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.759297   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:49.260881   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:46.640043   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.139213   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:47.639848   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.140080   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.639961   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.139473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:49.639212   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.139781   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:50.640028   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.140140   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:48.452516   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453064   79367 main.go:141] libmachine: (no-preload-545149) Found IP for machine: 192.168.39.162
	I0814 17:37:48.453092   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has current primary IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.453099   79367 main.go:141] libmachine: (no-preload-545149) Reserving static IP address...
	I0814 17:37:48.453513   79367 main.go:141] libmachine: (no-preload-545149) Reserved static IP address: 192.168.39.162
	I0814 17:37:48.453564   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.453578   79367 main.go:141] libmachine: (no-preload-545149) Waiting for SSH to be available...
	I0814 17:37:48.453608   79367 main.go:141] libmachine: (no-preload-545149) DBG | skip adding static IP to network mk-no-preload-545149 - found existing host DHCP lease matching {name: "no-preload-545149", mac: "52:54:00:d0:bd:d7", ip: "192.168.39.162"}
	I0814 17:37:48.453630   79367 main.go:141] libmachine: (no-preload-545149) DBG | Getting to WaitForSSH function...
	I0814 17:37:48.455959   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456279   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.456304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.456429   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH client type: external
	I0814 17:37:48.456449   79367 main.go:141] libmachine: (no-preload-545149) DBG | Using SSH private key: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa (-rw-------)
	I0814 17:37:48.456490   79367 main.go:141] libmachine: (no-preload-545149) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0814 17:37:48.456506   79367 main.go:141] libmachine: (no-preload-545149) DBG | About to run SSH command:
	I0814 17:37:48.456520   79367 main.go:141] libmachine: (no-preload-545149) DBG | exit 0
	I0814 17:37:48.579489   79367 main.go:141] libmachine: (no-preload-545149) DBG | SSH cmd err, output: <nil>: 
	I0814 17:37:48.579924   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetConfigRaw
	I0814 17:37:48.580615   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.583202   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583545   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.583592   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.583857   79367 profile.go:143] Saving config to /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/config.json ...
	I0814 17:37:48.584093   79367 machine.go:94] provisionDockerMachine start ...
	I0814 17:37:48.584113   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:48.584340   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.586712   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587086   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.587107   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.587259   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.587441   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.587720   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.587869   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.588029   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.588040   79367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0814 17:37:48.691255   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0814 17:37:48.691290   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691555   79367 buildroot.go:166] provisioning hostname "no-preload-545149"
	I0814 17:37:48.691593   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.691798   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.694492   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694768   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.694797   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.694907   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.695084   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695248   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.695396   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.695556   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.695777   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.695798   79367 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-545149 && echo "no-preload-545149" | sudo tee /etc/hostname
	I0814 17:37:48.813509   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-545149
	
	I0814 17:37:48.813537   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.816304   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816698   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.816732   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.816884   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:48.817057   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817265   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:48.817393   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:48.817586   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:48.817809   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:48.817836   79367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545149' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545149/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545149' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0814 17:37:48.927482   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0814 17:37:48.927512   79367 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19446-13977/.minikube CaCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19446-13977/.minikube}
	I0814 17:37:48.927540   79367 buildroot.go:174] setting up certificates
	I0814 17:37:48.927551   79367 provision.go:84] configureAuth start
	I0814 17:37:48.927567   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetMachineName
	I0814 17:37:48.927831   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:48.930532   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.930879   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.930906   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.931104   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:48.933420   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933754   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:48.933783   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:48.933893   79367 provision.go:143] copyHostCerts
	I0814 17:37:48.933968   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem, removing ...
	I0814 17:37:48.933979   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem
	I0814 17:37:48.934040   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/cert.pem (1123 bytes)
	I0814 17:37:48.934146   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem, removing ...
	I0814 17:37:48.934156   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem
	I0814 17:37:48.934186   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/key.pem (1679 bytes)
	I0814 17:37:48.934262   79367 exec_runner.go:144] found /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem, removing ...
	I0814 17:37:48.934271   79367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem
	I0814 17:37:48.934302   79367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19446-13977/.minikube/ca.pem (1078 bytes)
	I0814 17:37:48.934377   79367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem org=jenkins.no-preload-545149 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-545149]
	I0814 17:37:49.287517   79367 provision.go:177] copyRemoteCerts
	I0814 17:37:49.287580   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0814 17:37:49.287607   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.290280   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.290690   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.290856   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.291063   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.291180   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.291304   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.374716   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0814 17:37:49.398652   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0814 17:37:49.422885   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0814 17:37:49.448774   79367 provision.go:87] duration metric: took 521.207251ms to configureAuth
	I0814 17:37:49.448800   79367 buildroot.go:189] setting minikube options for container-runtime
	I0814 17:37:49.448972   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:37:49.449064   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.452034   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452373   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.452403   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.452604   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.452859   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453058   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.453217   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.453388   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.453579   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.453601   79367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0814 17:37:49.711896   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0814 17:37:49.711922   79367 machine.go:97] duration metric: took 1.127817152s to provisionDockerMachine
	I0814 17:37:49.711933   79367 start.go:293] postStartSetup for "no-preload-545149" (driver="kvm2")
	I0814 17:37:49.711942   79367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0814 17:37:49.711977   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.712299   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0814 17:37:49.712324   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.714736   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715059   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.715097   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.715232   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.715428   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.715616   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.715769   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.797746   79367 ssh_runner.go:195] Run: cat /etc/os-release
	I0814 17:37:49.801764   79367 info.go:137] Remote host: Buildroot 2023.02.9
	I0814 17:37:49.801794   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/addons for local assets ...
	I0814 17:37:49.801863   79367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19446-13977/.minikube/files for local assets ...
	I0814 17:37:49.801960   79367 filesync.go:149] local asset: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem -> 211772.pem in /etc/ssl/certs
	I0814 17:37:49.802081   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0814 17:37:49.811506   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:37:49.834762   79367 start.go:296] duration metric: took 122.81358ms for postStartSetup
	I0814 17:37:49.834812   79367 fix.go:56] duration metric: took 20.32268926s for fixHost
	I0814 17:37:49.834837   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.837418   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837739   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.837768   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.837903   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.838114   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838292   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.838438   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.838643   79367 main.go:141] libmachine: Using SSH client type: native
	I0814 17:37:49.838838   79367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I0814 17:37:49.838850   79367 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0814 17:37:49.944936   79367 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723657069.919883473
	
	I0814 17:37:49.944965   79367 fix.go:216] guest clock: 1723657069.919883473
	I0814 17:37:49.944975   79367 fix.go:229] Guest: 2024-08-14 17:37:49.919883473 +0000 UTC Remote: 2024-08-14 17:37:49.834818813 +0000 UTC m=+358.184638535 (delta=85.06466ms)
	I0814 17:37:49.945005   79367 fix.go:200] guest clock delta is within tolerance: 85.06466ms
	I0814 17:37:49.945017   79367 start.go:83] releasing machines lock for "no-preload-545149", held for 20.432923283s
	I0814 17:37:49.945044   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.945291   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:49.947847   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948269   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.948295   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.948500   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949082   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949262   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:37:49.949347   79367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0814 17:37:49.949406   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.949517   79367 ssh_runner.go:195] Run: cat /version.json
	I0814 17:37:49.949541   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:37:49.952281   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952328   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952667   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952692   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.952833   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.952836   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:49.952895   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:49.953037   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:37:49.953075   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953201   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953212   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:37:49.953400   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:37:49.953412   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:49.953543   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:37:50.072094   79367 ssh_runner.go:195] Run: systemctl --version
	I0814 17:37:50.080210   79367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0814 17:37:50.227736   79367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0814 17:37:50.233533   79367 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0814 17:37:50.233597   79367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0814 17:37:50.249452   79367 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0814 17:37:50.249474   79367 start.go:495] detecting cgroup driver to use...
	I0814 17:37:50.249552   79367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0814 17:37:50.265740   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0814 17:37:50.278769   79367 docker.go:217] disabling cri-docker service (if available) ...
	I0814 17:37:50.278833   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0814 17:37:50.291625   79367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0814 17:37:50.304529   79367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0814 17:37:50.415405   79367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0814 17:37:50.556016   79367 docker.go:233] disabling docker service ...
	I0814 17:37:50.556092   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0814 17:37:50.570197   79367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0814 17:37:50.583068   79367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0814 17:37:50.721414   79367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0814 17:37:50.850890   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
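The steps above hand the CRI socket over to CRI-O by taking the competing runtimes out of the picture. A minimal shell sketch of that phase, assembled from the commands logged above (unit names as logged):

	# stop, disable and mask cri-dockerd and dockerd so only CRI-O serves the CRI socket
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service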
	I0814 17:37:50.864530   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0814 17:37:50.882021   79367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0814 17:37:50.882097   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.891490   79367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0814 17:37:50.891564   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.901437   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.911316   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.920935   79367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0814 17:37:50.930571   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.940106   79367 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0814 17:37:50.957351   79367 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
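Taken together, the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to what minikube expects: the pinned pause image, the cgroupfs cgroup manager with conmon in the pod cgroup, and unprivileged low-port binding. A condensed sketch of the same edits, reconstructed from the logged commands:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# pin the pause image CRI-O reports to the kubelet
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	# use cgroupfs and run conmon in the pod cgroup
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# let pods bind ports below 1024 without extra privileges
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	# the stale minikube CNI dir is also cleared in this phase
	sudo rm -rf /etc/cni/net.mk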
	I0814 17:37:50.967222   79367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0814 17:37:50.976120   79367 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0814 17:37:50.976170   79367 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0814 17:37:50.990922   79367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
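The netfilter probe above is a check-then-fallback: when the bridge-nf sysctl is absent the kernel module simply has not been loaded, so br_netfilter is loaded and IPv4 forwarding is switched on. The pattern, as a small sketch built from the logged commands:

	# load br_netfilter only when the sysctl it provides is missing
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter
	fi
	# pods need the host to forward IPv4 traffic
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"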
	I0814 17:37:51.000086   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:37:51.116655   79367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0814 17:37:51.246182   79367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0814 17:37:51.246265   79367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0814 17:37:51.250838   79367 start.go:563] Will wait 60s for crictl version
	I0814 17:37:51.250900   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.254633   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0814 17:37:51.299890   79367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
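The CRI-O 1.29.1 banner above comes from a readiness probe run after the restart: wait for the CRI socket, then ask crictl and crio for their versions. Sketch of that probe, using the commands as logged:

	# wait for the CRI socket, then confirm the runtime answers
	stat /var/run/crio/crio.sock
	sudo /usr/bin/crictl version
	crio --version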
	I0814 17:37:51.299992   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.328292   79367 ssh_runner.go:195] Run: crio --version
	I0814 17:37:51.360415   79367 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0814 17:37:51.361536   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetIP
	I0814 17:37:51.364443   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.364884   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:37:51.364914   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:37:51.365112   79367 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0814 17:37:51.368941   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
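The /etc/hosts update above is idempotent: any existing host.minikube.internal line is filtered out before the fresh mapping is appended, so repeated provisioning never duplicates entries. The same command, unescaped for readability:

	# drop any stale entry, append the current gateway mapping, then install atomically
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts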
	I0814 17:37:51.380519   79367 kubeadm.go:883] updating cluster {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0814 17:37:51.380668   79367 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 17:37:51.380705   79367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0814 17:37:51.413314   79367 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0814 17:37:51.413346   79367 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0814 17:37:51.413417   79367 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.413435   79367 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.413452   79367 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.413395   79367 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.413473   79367 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0814 17:37:51.413440   79367 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.413521   79367 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.413529   79367 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.414920   79367 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:51.414940   79367 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0814 17:37:51.414983   79367 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.415006   79367 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.415010   79367 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.414982   79367 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.415070   79367 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.415100   79367 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.664642   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.686463   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:50.445457   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:52.945915   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.762809   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:54.259593   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:51.639969   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.139918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:52.639403   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.139921   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:53.640224   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.140272   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:54.639242   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.139908   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:55.639233   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:56.139955   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:51.699627   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0814 17:37:51.718031   79367 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0814 17:37:51.718085   79367 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.718133   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.736370   79367 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0814 17:37:51.736408   79367 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.736454   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.779229   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.800986   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.819343   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.841240   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.853614   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.853650   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.853753   79367 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0814 17:37:51.853798   79367 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.853842   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.866717   79367 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0814 17:37:51.866757   79367 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.866807   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.908593   79367 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0814 17:37:51.908644   79367 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.908701   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.936701   79367 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0814 17:37:51.936737   79367 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:51.936784   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:51.944882   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:51.944962   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:51.944983   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:51.945008   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:51.945070   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:51.945089   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.063281   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.080543   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.080556   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.080574   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0814 17:37:52.080629   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0814 17:37:52.080647   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.126573   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0814 17:37:52.205600   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0814 17:37:52.205623   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0814 17:37:52.236617   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0814 17:37:52.236678   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0814 17:37:52.236757   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.237083   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0814 17:37:52.237161   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:52.238804   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0814 17:37:52.238891   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:52.294945   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0814 17:37:52.295018   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0814 17:37:52.295064   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:37:52.295103   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0814 17:37:52.295127   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295189   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0814 17:37:52.295110   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:52.302365   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0814 17:37:52.302388   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0814 17:37:52.302423   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0814 17:37:52.302472   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:37:52.306933   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0814 17:37:52.307107   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0814 17:37:52.309298   79367 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.976780716s)
	I0814 17:37:54.272032   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0814 17:37:54.272053   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.272063   79367 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.962736886s)
	I0814 17:37:54.272100   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0814 17:37:54.271998   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (1.969503874s)
	I0814 17:37:54.272150   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0814 17:37:54.272105   79367 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0814 17:37:54.272192   79367 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:54.272250   79367 ssh_runner.go:195] Run: which crictl
	I0814 17:37:56.021236   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (1.749108117s)
	I0814 17:37:56.021281   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0814 17:37:56.021288   79367 ssh_runner.go:235] Completed: which crictl: (1.749013682s)
	I0814 17:37:56.021309   79367 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021370   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0814 17:37:56.021386   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:55.445017   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:57.445204   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:59.945329   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.260666   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:58.760907   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:37:56.639799   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.140184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:57.639918   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.139310   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:58.639393   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.140139   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.639614   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.139472   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:00.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.139946   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:37:59.830150   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.808753337s)
	I0814 17:37:59.830181   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0814 17:37:59.830205   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:37:59.830208   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.80880721s)
	I0814 17:37:59.830253   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:37:59.830255   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0814 17:38:02.444320   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:04.444667   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.260951   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:03.759895   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:01.639422   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.139858   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:02.639412   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.140047   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:03.640170   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.139779   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:04.639728   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.139343   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:05.640249   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.139448   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:01.796675   79367 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.966400982s)
	I0814 17:38:01.796690   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.966414051s)
	I0814 17:38:01.796708   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0814 17:38:01.796735   79367 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.796757   79367 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:38:01.796796   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0814 17:38:01.841898   79367 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0814 17:38:01.841994   79367 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.571965   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.775142217s)
	I0814 17:38:03.571991   79367 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.729967853s)
	I0814 17:38:03.572002   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0814 17:38:03.572019   79367 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0814 17:38:03.572028   79367 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:03.572079   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0814 17:38:04.422670   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0814 17:38:04.422705   79367 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:04.422760   79367 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0814 17:38:06.277419   79367 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.854632861s)
	I0814 17:38:06.277457   79367 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19446-13977/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0814 17:38:06.277488   79367 cache_images.go:123] Successfully loaded all cached images
	I0814 17:38:06.277494   79367 cache_images.go:92] duration metric: took 14.864134758s to LoadCachedImages
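The LoadCachedImages phase above follows one pattern per image: inspect the runtime for the expected image ID, remove a mismatched copy with crictl, check whether the cached archive is already under /var/lib/minikube/images, and load it with podman. A condensed sketch of that loop for a single image, assembled from the logged commands (the image name is just the example from this run):

	IMG=registry.k8s.io/kube-scheduler:v1.31.0
	TAR=/var/lib/minikube/images/kube-scheduler_v1.31.0
	# does the runtime already have the image under the expected ID?
	if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	    sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # clear any stale copy
	    sudo stat -c "%s %y" "$TAR"                            # archive already on the guest? (minikube scp's it over when not)
	    sudo podman load -i "$TAR"                             # import from the cached archive
	fi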
	I0814 17:38:06.277504   79367 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.0 crio true true} ...
	I0814 17:38:06.277628   79367 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-545149 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0814 17:38:06.277690   79367 ssh_runner.go:195] Run: crio config
	I0814 17:38:06.337971   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:06.337990   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:06.337999   79367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0814 17:38:06.338019   79367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545149 NodeName:no-preload-545149 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0814 17:38:06.338148   79367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545149"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0814 17:38:06.338222   79367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0814 17:38:06.348156   79367 binaries.go:44] Found k8s binaries, skipping transfer
	I0814 17:38:06.348219   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0814 17:38:06.356784   79367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0814 17:38:06.372439   79367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0814 17:38:06.388610   79367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0814 17:38:06.405084   79367 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I0814 17:38:06.408753   79367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0814 17:38:06.420313   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:38:06.546115   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:38:06.563747   79367 certs.go:68] Setting up /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149 for IP: 192.168.39.162
	I0814 17:38:06.563776   79367 certs.go:194] generating shared ca certs ...
	I0814 17:38:06.563799   79367 certs.go:226] acquiring lock for ca certs: {Name:mk48ea4eab2c47d5c81779d518bcd8aff8b52d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:38:06.563973   79367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key
	I0814 17:38:06.564035   79367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key
	I0814 17:38:06.564058   79367 certs.go:256] generating profile certs ...
	I0814 17:38:06.564150   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/client.key
	I0814 17:38:06.564207   79367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key.d0704694
	I0814 17:38:06.564241   79367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key
	I0814 17:38:06.564349   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem (1338 bytes)
	W0814 17:38:06.564377   79367 certs.go:480] ignoring /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177_empty.pem, impossibly tiny 0 bytes
	I0814 17:38:06.564386   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca-key.pem (1679 bytes)
	I0814 17:38:06.564411   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/ca.pem (1078 bytes)
	I0814 17:38:06.564437   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/cert.pem (1123 bytes)
	I0814 17:38:06.564459   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/certs/key.pem (1679 bytes)
	I0814 17:38:06.564497   79367 certs.go:484] found cert: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem (1708 bytes)
	I0814 17:38:06.565269   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0814 17:38:06.592622   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0814 17:38:06.619148   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0814 17:38:06.646169   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0814 17:38:06.682399   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0814 17:38:06.446354   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.948005   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:05.760991   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:08.260189   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:10.260816   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:06.639416   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.140176   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:07.639682   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.140063   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:08.640014   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.139435   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.639256   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.139949   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:10.640283   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:11.139394   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:06.714195   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0814 17:38:06.750431   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0814 17:38:06.772702   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/no-preload-545149/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0814 17:38:06.793932   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0814 17:38:06.815601   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/certs/21177.pem --> /usr/share/ca-certificates/21177.pem (1338 bytes)
	I0814 17:38:06.837187   79367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/ssl/certs/211772.pem --> /usr/share/ca-certificates/211772.pem (1708 bytes)
	I0814 17:38:06.858175   79367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0814 17:38:06.876187   79367 ssh_runner.go:195] Run: openssl version
	I0814 17:38:06.881909   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0814 17:38:06.892057   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896191   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 14 16:10 /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.896251   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0814 17:38:06.901630   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0814 17:38:06.910888   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21177.pem && ln -fs /usr/share/ca-certificates/21177.pem /etc/ssl/certs/21177.pem"
	I0814 17:38:06.920223   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924480   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 14 16:22 /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.924527   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21177.pem
	I0814 17:38:06.929591   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/21177.pem /etc/ssl/certs/51391683.0"
	I0814 17:38:06.939072   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/211772.pem && ln -fs /usr/share/ca-certificates/211772.pem /etc/ssl/certs/211772.pem"
	I0814 17:38:06.949970   79367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954288   79367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 14 16:22 /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.954339   79367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/211772.pem
	I0814 17:38:06.959551   79367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/211772.pem /etc/ssl/certs/3ec20f2e.0"
	I0814 17:38:06.969505   79367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0814 17:38:06.973905   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0814 17:38:06.980211   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0814 17:38:06.986779   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0814 17:38:06.992220   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0814 17:38:06.997446   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0814 17:38:07.002681   79367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
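Two details in the certificate phase above are worth calling out: each extra CA is linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), and every control-plane certificate is checked with -checkend 86400, i.e. it must stay valid for at least the next 24 hours. A small sketch of both checks built from the logged commands:

	# trust an extra CA: link it under its subject-hash name so OpenSSL can resolve it
	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"
	# fail early if a cert expires within the next 86400 seconds (24h)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400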
	I0814 17:38:07.008037   79367 kubeadm.go:392] StartCluster: {Name:no-preload-545149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-545149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 17:38:07.008131   79367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0814 17:38:07.008188   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.043144   79367 cri.go:89] found id: ""
	I0814 17:38:07.043214   79367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0814 17:38:07.052215   79367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0814 17:38:07.052233   79367 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0814 17:38:07.052281   79367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0814 17:38:07.060618   79367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0814 17:38:07.061557   79367 kubeconfig.go:125] found "no-preload-545149" server: "https://192.168.39.162:8443"
	I0814 17:38:07.063554   79367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0814 17:38:07.072026   79367 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I0814 17:38:07.072064   79367 kubeadm.go:1160] stopping kube-system containers ...
	I0814 17:38:07.072075   79367 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0814 17:38:07.072117   79367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0814 17:38:07.109349   79367 cri.go:89] found id: ""
	I0814 17:38:07.109412   79367 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0814 17:38:07.126888   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:38:07.138272   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:38:07.138293   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:38:07.138367   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:38:07.147160   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:38:07.147220   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:38:07.156664   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:38:07.165122   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:38:07.165167   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:38:07.173478   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.181391   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:38:07.181449   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:38:07.189750   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:38:07.198215   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:38:07.198274   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:38:07.207384   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:38:07.216034   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:07.337710   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.227720   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.455979   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.521250   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:08.654574   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:38:08.654684   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.155639   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.655182   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:09.696193   79367 api_server.go:72] duration metric: took 1.041620068s to wait for apiserver process to appear ...
	I0814 17:38:09.696223   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:38:09.696241   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:09.696703   79367 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I0814 17:38:10.197180   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.389673   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.389702   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.389717   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.403106   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0814 17:38:12.403138   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0814 17:38:12.696486   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:12.700755   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:12.700784   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.196293   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.200564   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0814 17:38:13.200592   79367 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0814 17:38:13.697253   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:38:13.705430   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:38:13.732816   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:38:13.732843   79367 api_server.go:131] duration metric: took 4.036614106s to wait for apiserver health ...
	I0814 17:38:13.732852   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:38:13.732860   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:38:13.734904   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
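The healthz wait above polls https://192.168.39.162:8443/healthz roughly every 500ms, treating 403 (anonymous requests before the RBAC bootstrap roles exist) and 500 (failed poststarthooks) as "not ready yet" until the endpoint returns 200 "ok". A minimal sketch of that kind of wait loop, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and a placeholder address; it is not minikube's implementation:

	// healthzwait.go — poll an apiserver /healthz endpoint until it reports ok.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Placeholder endpoint; the run above used https://192.168.39.162:8443/healthz.
		url := "https://127.0.0.1:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serving certificate is not in the system trust store here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				// 403 and 500 both mean "keep waiting" during startup.
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}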
	I0814 17:38:11.444846   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:13.943583   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:12.759294   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:14.760919   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:11.640107   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.140034   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:12.639463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.139428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.639575   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.140005   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:14.639473   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.140124   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:15.639459   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:16.139187   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:13.736533   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:38:13.756650   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:38:13.776947   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:38:13.803170   79367 system_pods.go:59] 8 kube-system pods found
	I0814 17:38:13.803214   79367 system_pods.go:61] "coredns-6f6b679f8f-tt46z" [169beaf0-0310-47d5-b212-9a81c6b3df68] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0814 17:38:13.803228   79367 system_pods.go:61] "etcd-no-preload-545149" [47e22bb4-bedb-433f-ae2e-f281269c6e87] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0814 17:38:13.803240   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [37854a66-b05b-49fe-834b-98f724087ec1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0814 17:38:13.803249   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [69189ec1-6f8c-4613-bf47-46e101a14ecd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0814 17:38:13.803307   79367 system_pods.go:61] "kube-proxy-gfrqp" [2206243d-f6e0-462c-969c-60e192038700] Running
	I0814 17:38:13.803314   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [0bbd41bd-0a18-486b-b78c-9a0e9efe209a] Running
	I0814 17:38:13.803322   79367 system_pods.go:61] "metrics-server-6867b74b74-8c2cx" [b30f3018-f316-4997-a8fa-ff6c83aa7dd7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:38:13.803341   79367 system_pods.go:61] "storage-provisioner" [635027cc-ac5d-4474-a243-ef48b3580998] Running
	I0814 17:38:13.803349   79367 system_pods.go:74] duration metric: took 26.377795ms to wait for pod list to return data ...
	I0814 17:38:13.803357   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:38:13.814093   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:38:13.814120   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:38:13.814131   79367 node_conditions.go:105] duration metric: took 10.768606ms to run NodePressure ...
	I0814 17:38:13.814147   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0814 17:38:14.196481   79367 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202205   79367 kubeadm.go:739] kubelet initialised
	I0814 17:38:14.202239   79367 kubeadm.go:740] duration metric: took 5.723699ms waiting for restarted kubelet to initialise ...
	I0814 17:38:14.202250   79367 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:38:14.209431   79367 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.215568   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215597   79367 pod_ready.go:81] duration metric: took 6.13175ms for pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.215610   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "coredns-6f6b679f8f-tt46z" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.215620   79367 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.227611   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227647   79367 pod_ready.go:81] duration metric: took 12.016107ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.227661   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "etcd-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.227669   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.235095   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235130   79367 pod_ready.go:81] duration metric: took 7.452418ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.235143   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-apiserver-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.235153   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.244417   79367 pod_ready.go:97] node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244447   79367 pod_ready.go:81] duration metric: took 9.283911ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	E0814 17:38:14.244459   79367 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-545149" hosting pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-545149" has status "Ready":"False"
	I0814 17:38:14.244466   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999946   79367 pod_ready.go:92] pod "kube-proxy-gfrqp" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:14.999968   79367 pod_ready.go:81] duration metric: took 755.491905ms for pod "kube-proxy-gfrqp" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:14.999977   79367 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
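The pod_ready waits above poll each system-critical pod until its PodCondition of type Ready reports status True, and skip pods whose hosting node is itself not Ready. A small client-go sketch of that condition check, assuming a reachable kubeconfig via the KUBECONFIG environment variable; this is an illustration of the idea, not the helper minikube uses:

	// podready.go — report whether a pod's Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady returns true when the pod carries a Ready condition with status True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path and pod name are placeholders for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-gfrqp", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	}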
	I0814 17:38:15.945421   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:18.444758   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.761265   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:16.639219   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.139463   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.639839   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.140251   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:18.639890   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.139999   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:19.639652   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.139316   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:20.639809   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:21.139471   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:17.005796   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:19.006769   79367 pod_ready.go:102] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:20.506792   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:38:20.506815   79367 pod_ready.go:81] duration metric: took 5.50683258s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.506825   79367 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	I0814 17:38:20.445449   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:22.446622   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:24.943859   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.760870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:23.761708   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:21.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.139292   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.640151   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.139450   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:23.639996   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.139447   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:24.639267   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.139595   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:25.639451   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:26.140190   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:22.513577   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:25.012936   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.945216   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.444769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.260276   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:28.263789   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:26.640120   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.140141   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.640184   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.139896   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:28.640066   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.140246   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:29.639895   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.139860   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:30.639358   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:31.140029   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:27.014354   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:29.516049   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.944967   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.444885   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:30.760174   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:33.259870   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:35.260137   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:31.639317   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.140039   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.640118   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.139240   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:33.640181   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.139789   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:34.639297   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.139871   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:35.639347   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:36.140044   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:32.013464   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:34.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.513741   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.944347   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:38.945374   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:37.759866   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:39.760334   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:36.640132   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.139254   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:37.639457   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.139928   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:38.639196   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:39.139906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:39.139980   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:39.179494   80228 cri.go:89] found id: ""
	I0814 17:38:39.179524   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.179535   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:39.179543   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:39.179619   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:39.210704   80228 cri.go:89] found id: ""
	I0814 17:38:39.210732   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.210741   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:39.210746   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:39.210796   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:39.247562   80228 cri.go:89] found id: ""
	I0814 17:38:39.247590   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.247597   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:39.247603   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:39.247665   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:39.281456   80228 cri.go:89] found id: ""
	I0814 17:38:39.281480   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.281488   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:39.281494   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:39.281553   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:39.318588   80228 cri.go:89] found id: ""
	I0814 17:38:39.318620   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.318630   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:39.318638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:39.318695   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:39.350270   80228 cri.go:89] found id: ""
	I0814 17:38:39.350294   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.350303   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:39.350311   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:39.350387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:39.382168   80228 cri.go:89] found id: ""
	I0814 17:38:39.382198   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.382209   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:39.382215   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:39.382325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:39.415307   80228 cri.go:89] found id: ""
	I0814 17:38:39.415342   80228 logs.go:276] 0 containers: []
	W0814 17:38:39.415354   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:39.415375   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:39.415388   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:39.469591   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:39.469632   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:39.482909   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:39.482942   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:39.609874   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:39.609906   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:39.609921   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:39.683210   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:39.683253   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
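Each gathering pass above enumerates containers per component with `crictl ps -a --quiet --name=<component>`; an empty result (found id: "") means that component's container has not been created yet, so the pass falls back to kubelet, dmesg, CRI-O, and container-status logs. A minimal sketch of driving that same crictl invocation from Go, assuming crictl is on PATH and the caller may run it via sudo; it is not minikube's cri package:

	// crilist.go — count containers per control-plane component via crictl.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the IDs of all containers (any state) whose name
	// matches the given component, using the same crictl flags seen in the log above.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("%s: error: %v\n", name, err)
				continue
			}
			fmt.Printf("%s: %d container(s) found\n", name, len(ids))
		}
	}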
	I0814 17:38:39.013876   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.513527   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:41.444286   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:43.444539   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.260548   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:44.263171   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:42.222687   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:42.235017   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:42.235088   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:42.285518   80228 cri.go:89] found id: ""
	I0814 17:38:42.285544   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.285553   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:42.285559   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:42.285614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:42.320462   80228 cri.go:89] found id: ""
	I0814 17:38:42.320492   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.320500   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:42.320506   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:42.320594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:42.353484   80228 cri.go:89] found id: ""
	I0814 17:38:42.353515   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.353528   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:42.353537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:42.353614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:42.388122   80228 cri.go:89] found id: ""
	I0814 17:38:42.388152   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.388163   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:42.388171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:42.388239   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:42.420246   80228 cri.go:89] found id: ""
	I0814 17:38:42.420275   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.420285   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:42.420293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:42.420359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:42.454636   80228 cri.go:89] found id: ""
	I0814 17:38:42.454669   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.454680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:42.454687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:42.454749   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:42.494638   80228 cri.go:89] found id: ""
	I0814 17:38:42.494670   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.494679   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:42.494686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:42.494751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:42.532224   80228 cri.go:89] found id: ""
	I0814 17:38:42.532257   80228 logs.go:276] 0 containers: []
	W0814 17:38:42.532269   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:42.532281   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:42.532296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:42.546100   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:42.546132   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:42.616561   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:42.616589   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:42.616604   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:42.697269   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:42.697305   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:42.737787   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:42.737821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.293788   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:45.309020   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:45.309080   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:45.349218   80228 cri.go:89] found id: ""
	I0814 17:38:45.349246   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.349254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:45.349260   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:45.349318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:45.387622   80228 cri.go:89] found id: ""
	I0814 17:38:45.387653   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.387664   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:45.387672   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:45.387750   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:45.422120   80228 cri.go:89] found id: ""
	I0814 17:38:45.422154   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.422164   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:45.422169   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:45.422226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:45.457309   80228 cri.go:89] found id: ""
	I0814 17:38:45.457337   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.457352   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:45.457361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:45.457412   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:45.488969   80228 cri.go:89] found id: ""
	I0814 17:38:45.489000   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.489011   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:45.489019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:45.489081   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:45.522230   80228 cri.go:89] found id: ""
	I0814 17:38:45.522258   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.522273   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:45.522280   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:45.522345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:45.555394   80228 cri.go:89] found id: ""
	I0814 17:38:45.555425   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.555440   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:45.555448   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:45.555520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:45.587870   80228 cri.go:89] found id: ""
	I0814 17:38:45.587899   80228 logs.go:276] 0 containers: []
	W0814 17:38:45.587910   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:45.587934   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:45.587951   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:45.638662   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:45.638709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:45.652217   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:45.652248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:45.733611   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:45.733635   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:45.733648   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:45.822733   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:45.822773   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:44.013405   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.014164   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:45.445289   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:47.944672   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:46.760279   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:49.260108   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:48.361519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:48.374848   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:48.374916   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:48.410849   80228 cri.go:89] found id: ""
	I0814 17:38:48.410897   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.410911   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:48.410920   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:48.410986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:48.448507   80228 cri.go:89] found id: ""
	I0814 17:38:48.448530   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.448537   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:48.448543   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:48.448594   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:48.486257   80228 cri.go:89] found id: ""
	I0814 17:38:48.486298   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.486306   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:48.486312   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:48.486363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:48.520447   80228 cri.go:89] found id: ""
	I0814 17:38:48.520473   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.520482   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:48.520487   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:48.520544   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:48.552659   80228 cri.go:89] found id: ""
	I0814 17:38:48.552690   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.552698   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:48.552704   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:48.552768   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:48.585302   80228 cri.go:89] found id: ""
	I0814 17:38:48.585331   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.585341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:48.585348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:48.585415   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:48.617388   80228 cri.go:89] found id: ""
	I0814 17:38:48.617417   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.617428   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:48.617435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:48.617503   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:48.658987   80228 cri.go:89] found id: ""
	I0814 17:38:48.659012   80228 logs.go:276] 0 containers: []
	W0814 17:38:48.659019   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:48.659027   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:48.659041   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:48.719882   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:48.719915   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:48.738962   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:48.738994   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:48.807703   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:48.807727   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:48.807739   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:48.886555   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:48.886585   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:48.514199   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.013627   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:50.444135   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:52.444957   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.446434   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.760518   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:54.260283   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:51.423653   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:51.436700   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:51.436792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:51.473198   80228 cri.go:89] found id: ""
	I0814 17:38:51.473227   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.473256   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:51.473262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:51.473311   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:51.508631   80228 cri.go:89] found id: ""
	I0814 17:38:51.508664   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.508675   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:51.508682   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:51.508743   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:51.540917   80228 cri.go:89] found id: ""
	I0814 17:38:51.540950   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.540958   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:51.540965   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:51.541014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:51.578112   80228 cri.go:89] found id: ""
	I0814 17:38:51.578140   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.578150   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:51.578158   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:51.578220   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:51.612664   80228 cri.go:89] found id: ""
	I0814 17:38:51.612692   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.612700   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:51.612706   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:51.612756   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:51.646374   80228 cri.go:89] found id: ""
	I0814 17:38:51.646399   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.646407   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:51.646413   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:51.646463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:51.682052   80228 cri.go:89] found id: ""
	I0814 17:38:51.682081   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.682092   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:51.682098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:51.682147   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:51.722625   80228 cri.go:89] found id: ""
	I0814 17:38:51.722653   80228 logs.go:276] 0 containers: []
	W0814 17:38:51.722663   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:51.722674   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:51.722687   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:51.771788   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:51.771818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:51.785403   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:51.785432   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:51.854249   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:51.854269   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:51.854281   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:51.938121   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:51.938155   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.475672   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:54.491309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:54.491399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:54.524971   80228 cri.go:89] found id: ""
	I0814 17:38:54.525001   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.525011   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:54.525023   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:54.525087   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:54.560631   80228 cri.go:89] found id: ""
	I0814 17:38:54.560661   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.560670   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:54.560675   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:54.560728   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:54.595710   80228 cri.go:89] found id: ""
	I0814 17:38:54.595740   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.595751   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:54.595759   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:54.595824   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:54.631449   80228 cri.go:89] found id: ""
	I0814 17:38:54.631476   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.631487   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:54.631495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:54.631557   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:54.666492   80228 cri.go:89] found id: ""
	I0814 17:38:54.666526   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.666539   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:54.666548   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:54.666617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:54.701111   80228 cri.go:89] found id: ""
	I0814 17:38:54.701146   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.701158   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:54.701166   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:54.701226   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:54.737535   80228 cri.go:89] found id: ""
	I0814 17:38:54.737574   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.737585   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:54.737595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:54.737653   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:54.771658   80228 cri.go:89] found id: ""
	I0814 17:38:54.771679   80228 logs.go:276] 0 containers: []
	W0814 17:38:54.771686   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:54.771694   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:54.771709   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:54.841798   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:54.841817   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:54.841829   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:54.930861   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:54.930917   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:54.970508   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:54.970540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:55.023077   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:55.023123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:53.513137   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.014005   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.945376   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:59.445437   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:56.260436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:58.759613   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:38:57.538876   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:38:57.551796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:38:57.551868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:38:57.584576   80228 cri.go:89] found id: ""
	I0814 17:38:57.584601   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.584609   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:38:57.584617   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:38:57.584687   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:38:57.617209   80228 cri.go:89] found id: ""
	I0814 17:38:57.617239   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.617249   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:38:57.617257   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:38:57.617338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:38:57.650062   80228 cri.go:89] found id: ""
	I0814 17:38:57.650089   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.650096   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:38:57.650102   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:38:57.650160   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:38:57.681118   80228 cri.go:89] found id: ""
	I0814 17:38:57.681146   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.681154   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:38:57.681160   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:38:57.681228   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:38:57.713803   80228 cri.go:89] found id: ""
	I0814 17:38:57.713834   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.713842   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:38:57.713851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:38:57.713920   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:38:57.749555   80228 cri.go:89] found id: ""
	I0814 17:38:57.749594   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.749604   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:38:57.749613   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:38:57.749677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:38:57.782714   80228 cri.go:89] found id: ""
	I0814 17:38:57.782744   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.782755   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:38:57.782763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:38:57.782826   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:38:57.815386   80228 cri.go:89] found id: ""
	I0814 17:38:57.815414   80228 logs.go:276] 0 containers: []
	W0814 17:38:57.815423   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:38:57.815436   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:38:57.815450   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:38:57.868153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:38:57.868183   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:38:57.881022   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:38:57.881053   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:38:57.950474   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:38:57.950501   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:38:57.950515   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:38:58.032611   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:38:58.032644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:00.569493   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:00.583257   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:00.583384   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:00.614680   80228 cri.go:89] found id: ""
	I0814 17:39:00.614712   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.614723   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:00.614732   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:00.614792   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:00.648161   80228 cri.go:89] found id: ""
	I0814 17:39:00.648189   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.648196   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:00.648203   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:00.648256   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:00.681844   80228 cri.go:89] found id: ""
	I0814 17:39:00.681872   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.681883   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:00.681890   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:00.681952   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:00.714773   80228 cri.go:89] found id: ""
	I0814 17:39:00.714804   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.714815   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:00.714823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:00.714891   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:00.747748   80228 cri.go:89] found id: ""
	I0814 17:39:00.747774   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.747781   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:00.747787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:00.747845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:00.783135   80228 cri.go:89] found id: ""
	I0814 17:39:00.783168   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.783179   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:00.783186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:00.783242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:00.817505   80228 cri.go:89] found id: ""
	I0814 17:39:00.817541   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.817552   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:00.817567   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:00.817633   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:00.849205   80228 cri.go:89] found id: ""
	I0814 17:39:00.849231   80228 logs.go:276] 0 containers: []
	W0814 17:39:00.849241   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:00.849252   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:00.849273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:00.902529   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:00.902567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:00.916313   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:00.916346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:00.988708   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:00.988725   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:00.988737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:01.063818   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:01.063853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:38:58.512313   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.513694   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:01.944987   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.945640   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:00.759979   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.259928   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:03.603241   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:03.616400   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:03.616504   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:03.649580   80228 cri.go:89] found id: ""
	I0814 17:39:03.649619   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.649637   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:03.649650   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:03.649718   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:03.686252   80228 cri.go:89] found id: ""
	I0814 17:39:03.686274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.686282   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:03.686289   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:03.686349   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:03.720995   80228 cri.go:89] found id: ""
	I0814 17:39:03.721024   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.721036   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:03.721043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:03.721094   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:03.753466   80228 cri.go:89] found id: ""
	I0814 17:39:03.753491   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.753500   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:03.753506   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:03.753554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:03.794427   80228 cri.go:89] found id: ""
	I0814 17:39:03.794450   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.794458   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:03.794464   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:03.794524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:03.826245   80228 cri.go:89] found id: ""
	I0814 17:39:03.826274   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.826282   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:03.826288   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:03.826355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:03.857208   80228 cri.go:89] found id: ""
	I0814 17:39:03.857238   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.857247   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:03.857253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:03.857325   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:03.892840   80228 cri.go:89] found id: ""
	I0814 17:39:03.892864   80228 logs.go:276] 0 containers: []
	W0814 17:39:03.892871   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:03.892879   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:03.892891   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:03.948554   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:03.948579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:03.962222   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:03.962249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:04.031833   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:04.031859   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:04.031875   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:04.109572   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:04.109636   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:03.013542   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.513201   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.444222   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:08.444833   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:05.759653   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:07.760063   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.259961   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:06.646923   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:06.659699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:06.659757   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:06.691918   80228 cri.go:89] found id: ""
	I0814 17:39:06.691941   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.691951   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:06.691958   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:06.692016   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:06.722675   80228 cri.go:89] found id: ""
	I0814 17:39:06.722703   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.722713   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:06.722720   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:06.722782   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:06.757215   80228 cri.go:89] found id: ""
	I0814 17:39:06.757248   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.757259   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:06.757266   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:06.757333   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:06.791337   80228 cri.go:89] found id: ""
	I0814 17:39:06.791370   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.791378   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:06.791384   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:06.791440   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:06.825182   80228 cri.go:89] found id: ""
	I0814 17:39:06.825209   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.825220   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:06.825234   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:06.825288   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:06.857473   80228 cri.go:89] found id: ""
	I0814 17:39:06.857498   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.857507   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:06.857514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:06.857582   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:06.891293   80228 cri.go:89] found id: ""
	I0814 17:39:06.891343   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.891355   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:06.891363   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:06.891421   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:06.927476   80228 cri.go:89] found id: ""
	I0814 17:39:06.927505   80228 logs.go:276] 0 containers: []
	W0814 17:39:06.927516   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:06.927527   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:06.927541   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:06.980604   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:06.980635   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:06.994648   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:06.994678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:07.072554   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:07.072580   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:07.072599   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:07.153141   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:07.153182   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:09.693348   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:09.705754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:09.705814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:09.739674   80228 cri.go:89] found id: ""
	I0814 17:39:09.739706   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.739717   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:09.739724   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:09.739788   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:09.774381   80228 cri.go:89] found id: ""
	I0814 17:39:09.774405   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.774413   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:09.774420   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:09.774478   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:09.806586   80228 cri.go:89] found id: ""
	I0814 17:39:09.806614   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.806623   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:09.806629   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:09.806696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:09.839564   80228 cri.go:89] found id: ""
	I0814 17:39:09.839594   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.839602   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:09.839614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:09.839672   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:09.872338   80228 cri.go:89] found id: ""
	I0814 17:39:09.872373   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.872385   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:09.872393   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:09.872457   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:09.904184   80228 cri.go:89] found id: ""
	I0814 17:39:09.904223   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.904231   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:09.904253   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:09.904312   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:09.937217   80228 cri.go:89] found id: ""
	I0814 17:39:09.937242   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.937251   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:09.937259   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:09.937322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:09.972273   80228 cri.go:89] found id: ""
	I0814 17:39:09.972301   80228 logs.go:276] 0 containers: []
	W0814 17:39:09.972313   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:09.972325   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:09.972341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:10.023736   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:10.023764   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:10.036654   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:10.036681   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:10.104031   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:10.104052   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:10.104068   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:10.187816   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:10.187853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:08.013632   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.513090   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:10.944491   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.945542   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.260129   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:14.759867   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:12.727237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:12.741970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:12.742041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:12.778721   80228 cri.go:89] found id: ""
	I0814 17:39:12.778748   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.778758   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:12.778765   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:12.778820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:12.812575   80228 cri.go:89] found id: ""
	I0814 17:39:12.812603   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.812610   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:12.812619   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:12.812678   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:12.845697   80228 cri.go:89] found id: ""
	I0814 17:39:12.845726   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.845737   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:12.845744   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:12.845809   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:12.879491   80228 cri.go:89] found id: ""
	I0814 17:39:12.879518   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.879529   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:12.879536   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:12.879604   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:12.912321   80228 cri.go:89] found id: ""
	I0814 17:39:12.912348   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.912356   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:12.912361   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:12.912410   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:12.948866   80228 cri.go:89] found id: ""
	I0814 17:39:12.948889   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.948897   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:12.948903   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:12.948963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:12.983394   80228 cri.go:89] found id: ""
	I0814 17:39:12.983444   80228 logs.go:276] 0 containers: []
	W0814 17:39:12.983459   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:12.983466   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:12.983530   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:13.018406   80228 cri.go:89] found id: ""
	I0814 17:39:13.018427   80228 logs.go:276] 0 containers: []
	W0814 17:39:13.018434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:13.018442   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:13.018457   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.069615   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:13.069655   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:13.082618   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:13.082651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:13.145033   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:13.145054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:13.145067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:13.225074   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:13.225108   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:15.765512   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:15.778320   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:15.778380   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:15.812847   80228 cri.go:89] found id: ""
	I0814 17:39:15.812876   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.812885   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:15.812896   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:15.812944   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:15.845131   80228 cri.go:89] found id: ""
	I0814 17:39:15.845159   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.845169   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:15.845176   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:15.845242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:15.879763   80228 cri.go:89] found id: ""
	I0814 17:39:15.879789   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.879799   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:15.879807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:15.879864   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:15.912746   80228 cri.go:89] found id: ""
	I0814 17:39:15.912776   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.912784   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:15.912797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:15.912858   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:15.946433   80228 cri.go:89] found id: ""
	I0814 17:39:15.946456   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.946465   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:15.946473   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:15.946534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:15.980060   80228 cri.go:89] found id: ""
	I0814 17:39:15.980086   80228 logs.go:276] 0 containers: []
	W0814 17:39:15.980096   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:15.980103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:15.980167   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:16.011539   80228 cri.go:89] found id: ""
	I0814 17:39:16.011570   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.011581   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:16.011590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:16.011660   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:16.046019   80228 cri.go:89] found id: ""
	I0814 17:39:16.046046   80228 logs.go:276] 0 containers: []
	W0814 17:39:16.046057   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:16.046068   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:16.046083   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:16.058442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:16.058470   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:16.132775   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:16.132799   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:16.132811   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:16.218360   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:16.218398   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:16.258070   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:16.258096   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:13.013275   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.013967   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:15.444280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:17.444827   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.943845   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:16.760773   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.259891   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:18.813127   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:18.826187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:18.826267   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:18.858405   80228 cri.go:89] found id: ""
	I0814 17:39:18.858433   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.858444   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:18.858452   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:18.858524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:18.893302   80228 cri.go:89] found id: ""
	I0814 17:39:18.893335   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.893342   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:18.893350   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:18.893417   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:18.929885   80228 cri.go:89] found id: ""
	I0814 17:39:18.929919   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.929929   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:18.929937   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:18.930000   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:18.966758   80228 cri.go:89] found id: ""
	I0814 17:39:18.966783   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.966792   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:18.966799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:18.966861   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:18.999815   80228 cri.go:89] found id: ""
	I0814 17:39:18.999838   80228 logs.go:276] 0 containers: []
	W0814 17:39:18.999845   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:18.999851   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:18.999915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:19.033737   80228 cri.go:89] found id: ""
	I0814 17:39:19.033761   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.033768   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:19.033774   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:19.033830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:19.070080   80228 cri.go:89] found id: ""
	I0814 17:39:19.070105   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.070113   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:19.070119   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:19.070190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:19.102868   80228 cri.go:89] found id: ""
	I0814 17:39:19.102897   80228 logs.go:276] 0 containers: []
	W0814 17:39:19.102907   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:19.102918   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:19.102932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:19.156525   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:19.156569   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:19.170193   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:19.170225   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:19.236521   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:19.236547   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:19.236561   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:19.315984   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:19.316024   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
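	(Editor's note, a hedged aside on the pattern in the lines above: the retry loop repeatedly runs "sudo crictl ps -a --quiet --name=<component>" and treats empty output as "no container found". The sketch below is not minikube's implementation, only a minimal Go reproduction of that check, assuming crictl is installed and reachable on the node.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasContainer reports whether any CRI container (running or exited) matches name.
	// Assumes crictl is on PATH and the caller may query the CRI socket via sudo.
	func hasContainer(name string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return false, err
		}
		// --quiet prints one container ID per line; empty output means no match,
		// which is exactly the "found id: \"\" ... 0 containers" case in the log.
		return strings.TrimSpace(string(out)) != "", nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ok, err := hasContainer(c)
			fmt.Printf("%s: found=%v err=%v\n", c, ok, err)
		}
	}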
	I0814 17:39:17.512553   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:19.513046   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.513082   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:22.444948   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:24.945111   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.260362   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:23.260567   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:21.855554   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:21.868457   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:21.868527   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:21.902098   80228 cri.go:89] found id: ""
	I0814 17:39:21.902124   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.902132   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:21.902139   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:21.902200   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:21.934876   80228 cri.go:89] found id: ""
	I0814 17:39:21.934908   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.934919   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:21.934926   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:21.934987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:21.976507   80228 cri.go:89] found id: ""
	I0814 17:39:21.976536   80228 logs.go:276] 0 containers: []
	W0814 17:39:21.976548   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:21.976555   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:21.976617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:22.013876   80228 cri.go:89] found id: ""
	I0814 17:39:22.013897   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.013904   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:22.013909   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:22.013955   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:22.051943   80228 cri.go:89] found id: ""
	I0814 17:39:22.051969   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.051979   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:22.051999   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:22.052064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:22.084702   80228 cri.go:89] found id: ""
	I0814 17:39:22.084725   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.084733   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:22.084738   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:22.084784   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:22.117397   80228 cri.go:89] found id: ""
	I0814 17:39:22.117424   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.117432   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:22.117439   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:22.117490   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:22.154139   80228 cri.go:89] found id: ""
	I0814 17:39:22.154168   80228 logs.go:276] 0 containers: []
	W0814 17:39:22.154178   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:22.154189   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:22.154201   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:22.205550   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:22.205580   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:22.219644   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:22.219679   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:22.288934   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:22.288957   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:22.288969   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:22.372917   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:22.372954   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:24.912578   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:24.925365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:24.925430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:24.961207   80228 cri.go:89] found id: ""
	I0814 17:39:24.961234   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.961248   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:24.961257   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:24.961339   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:24.998878   80228 cri.go:89] found id: ""
	I0814 17:39:24.998904   80228 logs.go:276] 0 containers: []
	W0814 17:39:24.998911   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:24.998918   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:24.998971   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:25.034141   80228 cri.go:89] found id: ""
	I0814 17:39:25.034174   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.034187   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:25.034196   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:25.034274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:25.075634   80228 cri.go:89] found id: ""
	I0814 17:39:25.075667   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.075679   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:25.075688   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:25.075759   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:25.112890   80228 cri.go:89] found id: ""
	I0814 17:39:25.112929   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.112939   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:25.112946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:25.113007   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:25.152887   80228 cri.go:89] found id: ""
	I0814 17:39:25.152913   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.152921   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:25.152927   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:25.152987   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:25.186421   80228 cri.go:89] found id: ""
	I0814 17:39:25.186452   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.186463   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:25.186471   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:25.186537   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:25.220390   80228 cri.go:89] found id: ""
	I0814 17:39:25.220417   80228 logs.go:276] 0 containers: []
	W0814 17:39:25.220425   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:25.220432   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:25.220446   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:25.296112   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:25.296146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:25.335421   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:25.335449   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:25.387690   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:25.387718   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:25.401926   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:25.401957   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:25.471111   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
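	(Editor's note, a hedged aside: every "describe nodes" attempt above fails with "connection to the server localhost:8443 was refused", which just means nothing is listening on the apiserver port because the kube-apiserver container was never started. The following minimal probe, not part of the test itself, demonstrates the same condition.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			// Matches the kubectl error in the log: the port is closed, so every
			// kubeconfig-based call against localhost:8443 fails immediately.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}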
	I0814 17:39:24.012534   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:26.513529   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.445280   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:29.445416   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:25.759098   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.759924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:30.259610   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:27.972237   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:27.985512   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:27.985575   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:28.019454   80228 cri.go:89] found id: ""
	I0814 17:39:28.019482   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.019493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:28.019502   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:28.019566   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:28.056908   80228 cri.go:89] found id: ""
	I0814 17:39:28.056931   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.056939   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:28.056944   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:28.056998   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:28.090678   80228 cri.go:89] found id: ""
	I0814 17:39:28.090707   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.090715   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:28.090721   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:28.090785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:28.125557   80228 cri.go:89] found id: ""
	I0814 17:39:28.125591   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.125609   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:28.125620   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:28.125682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:28.158092   80228 cri.go:89] found id: ""
	I0814 17:39:28.158121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.158129   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:28.158135   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:28.158191   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:28.193403   80228 cri.go:89] found id: ""
	I0814 17:39:28.193434   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.193445   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:28.193454   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:28.193524   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:28.231095   80228 cri.go:89] found id: ""
	I0814 17:39:28.231121   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.231131   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:28.231139   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:28.231203   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:28.280157   80228 cri.go:89] found id: ""
	I0814 17:39:28.280185   80228 logs.go:276] 0 containers: []
	W0814 17:39:28.280196   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:28.280207   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:28.280220   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:28.352877   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:28.352894   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:28.352906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:28.439692   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:28.439736   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:28.479986   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:28.480012   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:28.538454   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:28.538493   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.052941   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:31.065810   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:31.065879   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:31.097988   80228 cri.go:89] found id: ""
	I0814 17:39:31.098013   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.098020   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:31.098045   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:31.098102   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:31.130126   80228 cri.go:89] found id: ""
	I0814 17:39:31.130152   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.130160   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:31.130166   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:31.130225   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:31.165945   80228 cri.go:89] found id: ""
	I0814 17:39:31.165984   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.165995   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:31.166003   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:31.166064   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:31.199749   80228 cri.go:89] found id: ""
	I0814 17:39:31.199772   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.199778   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:31.199784   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:31.199843   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:31.231398   80228 cri.go:89] found id: ""
	I0814 17:39:31.231425   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.231436   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:31.231444   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:31.231528   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:31.263842   80228 cri.go:89] found id: ""
	I0814 17:39:31.263868   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.263878   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:31.263885   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:31.263950   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:31.299258   80228 cri.go:89] found id: ""
	I0814 17:39:31.299289   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.299301   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:31.299309   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:31.299399   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:29.013468   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.013638   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.445769   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:33.944939   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:32.260117   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:34.262303   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:31.332626   80228 cri.go:89] found id: ""
	I0814 17:39:31.332649   80228 logs.go:276] 0 containers: []
	W0814 17:39:31.332657   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:31.332666   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:31.332678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:31.369262   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:31.369288   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:31.426003   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:31.426034   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:31.439583   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:31.439611   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:31.507538   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:31.507563   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:31.507583   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.085342   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:34.097491   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:34.097567   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:34.129220   80228 cri.go:89] found id: ""
	I0814 17:39:34.129244   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.129254   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:34.129262   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:34.129322   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:34.161233   80228 cri.go:89] found id: ""
	I0814 17:39:34.161256   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.161264   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:34.161270   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:34.161334   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:34.193649   80228 cri.go:89] found id: ""
	I0814 17:39:34.193675   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.193683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:34.193689   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:34.193754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:34.226722   80228 cri.go:89] found id: ""
	I0814 17:39:34.226753   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.226763   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:34.226772   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:34.226842   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:34.259735   80228 cri.go:89] found id: ""
	I0814 17:39:34.259761   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.259774   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:34.259787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:34.259850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:34.296804   80228 cri.go:89] found id: ""
	I0814 17:39:34.296830   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.296838   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:34.296844   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:34.296894   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:34.328942   80228 cri.go:89] found id: ""
	I0814 17:39:34.328973   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.328982   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:34.328988   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:34.329041   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:34.360820   80228 cri.go:89] found id: ""
	I0814 17:39:34.360847   80228 logs.go:276] 0 containers: []
	W0814 17:39:34.360858   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:34.360868   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:34.360882   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:34.411106   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:34.411142   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:34.424737   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:34.424769   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:34.489094   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:34.489122   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:34.489138   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:34.569783   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:34.569818   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:33.015308   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.513073   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:35.945264   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:38.444913   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:36.760740   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:39.260499   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:37.107492   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:37.120829   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:37.120901   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:37.154556   80228 cri.go:89] found id: ""
	I0814 17:39:37.154589   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.154601   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:37.154609   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:37.154673   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:37.192570   80228 cri.go:89] found id: ""
	I0814 17:39:37.192602   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.192609   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:37.192615   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:37.192679   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:37.225845   80228 cri.go:89] found id: ""
	I0814 17:39:37.225891   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.225902   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:37.225917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:37.225986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:37.262370   80228 cri.go:89] found id: ""
	I0814 17:39:37.262399   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.262408   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:37.262416   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:37.262481   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:37.297642   80228 cri.go:89] found id: ""
	I0814 17:39:37.297669   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.297680   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:37.297687   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:37.297754   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:37.331006   80228 cri.go:89] found id: ""
	I0814 17:39:37.331032   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.331041   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:37.331046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:37.331111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:37.364753   80228 cri.go:89] found id: ""
	I0814 17:39:37.364777   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.364786   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:37.364792   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:37.364850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:37.397722   80228 cri.go:89] found id: ""
	I0814 17:39:37.397748   80228 logs.go:276] 0 containers: []
	W0814 17:39:37.397760   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:37.397770   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:37.397785   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:37.473616   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:37.473643   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:37.473659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:37.557672   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:37.557710   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.596337   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:37.596368   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:37.646815   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:37.646853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.160391   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:40.174099   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:40.174181   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:40.208783   80228 cri.go:89] found id: ""
	I0814 17:39:40.208814   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.208821   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:40.208829   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:40.208880   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:40.243555   80228 cri.go:89] found id: ""
	I0814 17:39:40.243580   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.243588   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:40.243594   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:40.243661   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:40.276685   80228 cri.go:89] found id: ""
	I0814 17:39:40.276711   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.276723   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:40.276731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:40.276795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:40.309893   80228 cri.go:89] found id: ""
	I0814 17:39:40.309925   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.309937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:40.309944   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:40.310073   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:40.341724   80228 cri.go:89] found id: ""
	I0814 17:39:40.341751   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.341762   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:40.341770   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:40.341834   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:40.376442   80228 cri.go:89] found id: ""
	I0814 17:39:40.376478   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.376487   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:40.376495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:40.376558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:40.419240   80228 cri.go:89] found id: ""
	I0814 17:39:40.419269   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.419277   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:40.419284   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:40.419374   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:40.464678   80228 cri.go:89] found id: ""
	I0814 17:39:40.464703   80228 logs.go:276] 0 containers: []
	W0814 17:39:40.464712   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:40.464721   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:40.464737   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:40.531138   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:40.531175   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:40.546809   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:40.546842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:40.618791   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:40.618809   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:40.618821   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:40.706169   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:40.706219   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:37.513604   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.013349   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:40.445989   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:42.944417   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:41.261429   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.760436   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:43.250987   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:43.266109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:43.266179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:43.301860   80228 cri.go:89] found id: ""
	I0814 17:39:43.301891   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.301899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:43.301908   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:43.301991   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:43.337166   80228 cri.go:89] found id: ""
	I0814 17:39:43.337195   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.337205   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:43.337212   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:43.337262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:43.370640   80228 cri.go:89] found id: ""
	I0814 17:39:43.370671   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.370683   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:43.370696   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:43.370752   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:43.405598   80228 cri.go:89] found id: ""
	I0814 17:39:43.405624   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.405632   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:43.405638   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:43.405705   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:43.437161   80228 cri.go:89] found id: ""
	I0814 17:39:43.437184   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.437192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:43.437198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:43.437295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:43.470675   80228 cri.go:89] found id: ""
	I0814 17:39:43.470707   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.470718   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:43.470726   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:43.470787   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:43.503036   80228 cri.go:89] found id: ""
	I0814 17:39:43.503062   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.503073   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:43.503081   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:43.503149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:43.538269   80228 cri.go:89] found id: ""
	I0814 17:39:43.538296   80228 logs.go:276] 0 containers: []
	W0814 17:39:43.538304   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:43.538328   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:43.538340   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:43.621889   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:43.621936   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:43.667460   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:43.667491   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:43.723630   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:43.723663   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:43.738905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:43.738939   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:43.805484   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:46.306031   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:42.512438   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:44.513112   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.513203   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:45.445470   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:47.944790   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.260236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:48.260662   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:46.324624   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:46.324696   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:46.360039   80228 cri.go:89] found id: ""
	I0814 17:39:46.360066   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.360074   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:46.360082   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:46.360131   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:46.413735   80228 cri.go:89] found id: ""
	I0814 17:39:46.413767   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.413779   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:46.413788   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:46.413876   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:46.458823   80228 cri.go:89] found id: ""
	I0814 17:39:46.458851   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.458861   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:46.458869   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:46.458928   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:46.495347   80228 cri.go:89] found id: ""
	I0814 17:39:46.495378   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.495387   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:46.495392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:46.495441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:46.531502   80228 cri.go:89] found id: ""
	I0814 17:39:46.531533   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.531545   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:46.531554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:46.531624   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:46.564450   80228 cri.go:89] found id: ""
	I0814 17:39:46.564473   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.564482   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:46.564488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:46.564535   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:46.598293   80228 cri.go:89] found id: ""
	I0814 17:39:46.598401   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.598421   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:46.598431   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:46.598498   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:46.632370   80228 cri.go:89] found id: ""
	I0814 17:39:46.632400   80228 logs.go:276] 0 containers: []
	W0814 17:39:46.632411   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:46.632423   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:46.632438   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:46.711814   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:46.711848   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:46.749410   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:46.749443   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:46.801686   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:46.801720   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:46.815196   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:46.815218   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:46.885648   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.386223   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:49.399359   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:49.399430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:49.432133   80228 cri.go:89] found id: ""
	I0814 17:39:49.432168   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.432179   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:49.432186   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:49.432250   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:49.469760   80228 cri.go:89] found id: ""
	I0814 17:39:49.469790   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.469799   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:49.469811   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:49.469873   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:49.500437   80228 cri.go:89] found id: ""
	I0814 17:39:49.500466   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.500474   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:49.500481   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:49.500531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:49.533685   80228 cri.go:89] found id: ""
	I0814 17:39:49.533709   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.533717   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:49.533723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:49.533790   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:49.570551   80228 cri.go:89] found id: ""
	I0814 17:39:49.570577   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.570584   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:49.570590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:49.570654   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:49.606649   80228 cri.go:89] found id: ""
	I0814 17:39:49.606672   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.606680   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:49.606686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:49.606734   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:49.638060   80228 cri.go:89] found id: ""
	I0814 17:39:49.638090   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.638101   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:49.638109   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:49.638178   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:49.674503   80228 cri.go:89] found id: ""
	I0814 17:39:49.674526   80228 logs.go:276] 0 containers: []
	W0814 17:39:49.674534   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:49.674543   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:49.674563   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:49.710185   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:49.710213   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:49.764112   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:49.764146   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:49.777862   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:49.777888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:49.849786   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:49.849806   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:49.849819   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:48.513418   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:51.013242   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.444526   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.444788   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.944646   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:50.759890   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.760236   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:54.760324   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:52.429811   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:52.444364   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:52.444441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:52.483047   80228 cri.go:89] found id: ""
	I0814 17:39:52.483074   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.483085   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:52.483093   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:52.483157   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:52.520236   80228 cri.go:89] found id: ""
	I0814 17:39:52.520264   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.520274   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:52.520287   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:52.520353   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:52.553757   80228 cri.go:89] found id: ""
	I0814 17:39:52.553784   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.553795   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:52.553802   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:52.553869   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:52.588782   80228 cri.go:89] found id: ""
	I0814 17:39:52.588808   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.588818   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:52.588827   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:52.588893   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:52.620144   80228 cri.go:89] found id: ""
	I0814 17:39:52.620180   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.620192   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:52.620201   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:52.620274   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:52.652712   80228 cri.go:89] found id: ""
	I0814 17:39:52.652743   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.652755   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:52.652763   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:52.652825   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:52.687789   80228 cri.go:89] found id: ""
	I0814 17:39:52.687819   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.687831   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:52.687838   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:52.687892   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:52.718996   80228 cri.go:89] found id: ""
	I0814 17:39:52.719021   80228 logs.go:276] 0 containers: []
	W0814 17:39:52.719031   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:52.719041   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:52.719055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:52.775775   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:52.775808   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:52.789024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:52.789055   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:52.863320   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:52.863351   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:52.863366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:52.941533   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:52.941571   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:55.477833   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:55.490723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:55.490783   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:55.525816   80228 cri.go:89] found id: ""
	I0814 17:39:55.525844   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.525852   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:55.525859   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:55.525908   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:55.561855   80228 cri.go:89] found id: ""
	I0814 17:39:55.561878   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.561887   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:55.561892   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:55.561949   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:55.599997   80228 cri.go:89] found id: ""
	I0814 17:39:55.600027   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.600038   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:55.600046   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:55.600112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:55.632869   80228 cri.go:89] found id: ""
	I0814 17:39:55.632902   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.632914   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:55.632922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:55.632990   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:55.666029   80228 cri.go:89] found id: ""
	I0814 17:39:55.666055   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.666066   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:55.666079   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:55.666136   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:55.697222   80228 cri.go:89] found id: ""
	I0814 17:39:55.697247   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.697254   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:55.697260   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:55.697308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:55.729517   80228 cri.go:89] found id: ""
	I0814 17:39:55.729549   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.729561   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:55.729576   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:55.729640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:55.763890   80228 cri.go:89] found id: ""
	I0814 17:39:55.763922   80228 logs.go:276] 0 containers: []
	W0814 17:39:55.763934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:55.763944   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:55.763960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:55.819588   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:55.819624   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:55.833281   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:55.833314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:55.904610   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:55.904632   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:55.904644   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:55.981035   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:55.981069   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:53.513407   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:55.513734   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:56.945649   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.444937   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:57.259832   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:59.760669   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:39:58.522870   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:39:58.536151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:39:58.536224   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:39:58.568827   80228 cri.go:89] found id: ""
	I0814 17:39:58.568857   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.568869   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:39:58.568877   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:39:58.568946   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:39:58.600523   80228 cri.go:89] found id: ""
	I0814 17:39:58.600554   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.600564   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:39:58.600571   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:39:58.600640   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:39:58.634201   80228 cri.go:89] found id: ""
	I0814 17:39:58.634232   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.634240   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:39:58.634245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:39:58.634308   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:39:58.668746   80228 cri.go:89] found id: ""
	I0814 17:39:58.668772   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.668781   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:39:58.668787   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:39:58.668847   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:39:58.699695   80228 cri.go:89] found id: ""
	I0814 17:39:58.699727   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.699739   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:39:58.699752   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:39:58.699836   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:39:58.731047   80228 cri.go:89] found id: ""
	I0814 17:39:58.731081   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.731095   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:39:58.731103   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:39:58.731168   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:39:58.773454   80228 cri.go:89] found id: ""
	I0814 17:39:58.773486   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.773495   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:39:58.773501   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:39:58.773561   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:39:58.810135   80228 cri.go:89] found id: ""
	I0814 17:39:58.810159   80228 logs.go:276] 0 containers: []
	W0814 17:39:58.810166   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:39:58.810175   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:39:58.810191   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:39:58.844897   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:39:58.844925   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:39:58.901700   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:39:58.901745   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:39:58.914272   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:39:58.914296   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:39:58.984593   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:39:58.984610   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:39:58.984622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:39:57.513854   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:00.013241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.945861   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.444575   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:02.262241   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.760164   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:01.563227   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:01.576764   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:01.576840   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:01.610842   80228 cri.go:89] found id: ""
	I0814 17:40:01.610871   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.610878   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:01.610884   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:01.610935   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:01.643774   80228 cri.go:89] found id: ""
	I0814 17:40:01.643806   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.643816   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:01.643824   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:01.643888   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:01.677867   80228 cri.go:89] found id: ""
	I0814 17:40:01.677892   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.677899   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:01.677906   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:01.677967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:01.712394   80228 cri.go:89] found id: ""
	I0814 17:40:01.712420   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.712427   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:01.712433   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:01.712492   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:01.745637   80228 cri.go:89] found id: ""
	I0814 17:40:01.745666   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.745676   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:01.745683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:01.745745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:01.782364   80228 cri.go:89] found id: ""
	I0814 17:40:01.782394   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.782404   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:01.782411   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:01.782484   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:01.814569   80228 cri.go:89] found id: ""
	I0814 17:40:01.814596   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.814605   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:01.814614   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:01.814674   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:01.850421   80228 cri.go:89] found id: ""
	I0814 17:40:01.850450   80228 logs.go:276] 0 containers: []
	W0814 17:40:01.850459   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:01.850468   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:01.850482   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:01.862965   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:01.863001   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:01.931312   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:01.931357   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:01.931375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:02.008236   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:02.008278   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:02.043238   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:02.043267   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:04.596909   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:04.610091   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:04.610158   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:04.645169   80228 cri.go:89] found id: ""
	I0814 17:40:04.645195   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.645205   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:04.645213   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:04.645279   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:04.677708   80228 cri.go:89] found id: ""
	I0814 17:40:04.677740   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.677750   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:04.677761   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:04.677823   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:04.710319   80228 cri.go:89] found id: ""
	I0814 17:40:04.710351   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.710362   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:04.710374   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:04.710443   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:04.745166   80228 cri.go:89] found id: ""
	I0814 17:40:04.745202   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.745219   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:04.745226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:04.745287   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:04.777307   80228 cri.go:89] found id: ""
	I0814 17:40:04.777354   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.777376   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:04.777383   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:04.777447   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:04.813854   80228 cri.go:89] found id: ""
	I0814 17:40:04.813886   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.813901   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:04.813908   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:04.813972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:04.848014   80228 cri.go:89] found id: ""
	I0814 17:40:04.848041   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.848049   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:04.848055   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:04.848113   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:04.882689   80228 cri.go:89] found id: ""
	I0814 17:40:04.882719   80228 logs.go:276] 0 containers: []
	W0814 17:40:04.882731   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:04.882742   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:04.882760   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:04.952074   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:04.952096   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:04.952112   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:05.030258   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:05.030300   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:05.066509   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:05.066542   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:05.120153   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:05.120195   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:02.512935   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:04.513254   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.445637   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.945142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:06.760223   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:08.760857   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:07.634404   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:07.646900   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:07.646966   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:07.678654   80228 cri.go:89] found id: ""
	I0814 17:40:07.678680   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.678689   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:07.678696   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:07.678753   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:07.711355   80228 cri.go:89] found id: ""
	I0814 17:40:07.711381   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.711389   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:07.711395   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:07.711446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:07.744134   80228 cri.go:89] found id: ""
	I0814 17:40:07.744161   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.744169   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:07.744179   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:07.744242   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:07.776981   80228 cri.go:89] found id: ""
	I0814 17:40:07.777008   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.777015   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:07.777022   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:07.777086   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:07.811626   80228 cri.go:89] found id: ""
	I0814 17:40:07.811651   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.811661   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:07.811667   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:07.811720   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:07.843218   80228 cri.go:89] found id: ""
	I0814 17:40:07.843251   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.843262   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:07.843270   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:07.843355   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:07.875208   80228 cri.go:89] found id: ""
	I0814 17:40:07.875232   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.875239   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:07.875245   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:07.875295   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:07.907896   80228 cri.go:89] found id: ""
	I0814 17:40:07.907923   80228 logs.go:276] 0 containers: []
	W0814 17:40:07.907934   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:07.907945   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:07.907960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:07.959717   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:07.959753   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:07.973050   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:07.973081   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:08.035085   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:08.035107   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:08.035120   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:08.109722   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:08.109770   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:10.648203   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:10.661194   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:10.661280   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:10.698401   80228 cri.go:89] found id: ""
	I0814 17:40:10.698431   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.698442   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:10.698450   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:10.698515   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:10.730057   80228 cri.go:89] found id: ""
	I0814 17:40:10.730083   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.730094   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:10.730101   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:10.730163   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:10.768780   80228 cri.go:89] found id: ""
	I0814 17:40:10.768807   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.768817   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:10.768824   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:10.768885   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:10.800866   80228 cri.go:89] found id: ""
	I0814 17:40:10.800898   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.800907   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:10.800917   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:10.800984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:10.833741   80228 cri.go:89] found id: ""
	I0814 17:40:10.833771   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.833782   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:10.833789   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:10.833850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:10.865670   80228 cri.go:89] found id: ""
	I0814 17:40:10.865699   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.865706   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:10.865717   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:10.865770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:10.904726   80228 cri.go:89] found id: ""
	I0814 17:40:10.904757   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.904765   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:10.904771   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:10.904821   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:10.940549   80228 cri.go:89] found id: ""
	I0814 17:40:10.940578   80228 logs.go:276] 0 containers: []
	W0814 17:40:10.940588   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:10.940598   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:10.940620   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:10.992592   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:10.992622   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:11.006388   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:11.006412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:11.075455   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:11.075473   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:11.075486   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:11.156622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:11.156658   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:07.012878   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:09.013908   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.512592   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.444764   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.944931   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:11.259959   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.760823   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:13.695055   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:13.709460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:13.709531   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:13.741941   80228 cri.go:89] found id: ""
	I0814 17:40:13.741967   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.741975   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:13.741981   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:13.742042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:13.773916   80228 cri.go:89] found id: ""
	I0814 17:40:13.773940   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.773947   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:13.773952   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:13.773999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:13.807871   80228 cri.go:89] found id: ""
	I0814 17:40:13.807902   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.807912   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:13.807918   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:13.807981   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:13.840902   80228 cri.go:89] found id: ""
	I0814 17:40:13.840931   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.840943   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:13.840952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:13.841018   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:13.871969   80228 cri.go:89] found id: ""
	I0814 17:40:13.871998   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.872010   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:13.872019   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:13.872090   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:13.905502   80228 cri.go:89] found id: ""
	I0814 17:40:13.905524   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.905531   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:13.905537   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:13.905599   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:13.937356   80228 cri.go:89] found id: ""
	I0814 17:40:13.937386   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.937396   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:13.937404   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:13.937466   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:13.972383   80228 cri.go:89] found id: ""
	I0814 17:40:13.972410   80228 logs.go:276] 0 containers: []
	W0814 17:40:13.972418   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:13.972427   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:13.972448   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:14.022691   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:14.022717   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:14.035543   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:14.035567   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:14.104869   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:14.104889   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:14.104905   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:14.182185   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:14.182221   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:13.513519   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.012958   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:15.945499   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.445122   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.259488   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:18.259706   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.259972   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:16.720519   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:16.734323   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:16.734406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:16.769454   80228 cri.go:89] found id: ""
	I0814 17:40:16.769483   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.769493   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:16.769501   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:16.769565   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:16.801513   80228 cri.go:89] found id: ""
	I0814 17:40:16.801541   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.801548   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:16.801554   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:16.801610   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:16.835184   80228 cri.go:89] found id: ""
	I0814 17:40:16.835212   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.835220   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:16.835226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:16.835275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:16.867162   80228 cri.go:89] found id: ""
	I0814 17:40:16.867192   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.867201   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:16.867207   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:16.867257   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:16.902912   80228 cri.go:89] found id: ""
	I0814 17:40:16.902942   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.902953   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:16.902961   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:16.903026   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:16.935004   80228 cri.go:89] found id: ""
	I0814 17:40:16.935033   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.935044   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:16.935052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:16.935115   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:16.969082   80228 cri.go:89] found id: ""
	I0814 17:40:16.969110   80228 logs.go:276] 0 containers: []
	W0814 17:40:16.969120   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:16.969127   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:16.969194   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:17.002594   80228 cri.go:89] found id: ""
	I0814 17:40:17.002622   80228 logs.go:276] 0 containers: []
	W0814 17:40:17.002633   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:17.002644   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:17.002659   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:17.054319   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:17.054359   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:17.068024   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:17.068048   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:17.139480   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:17.139499   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:17.139514   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:17.222086   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:17.222140   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:19.758630   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:19.772186   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:19.772254   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:19.807719   80228 cri.go:89] found id: ""
	I0814 17:40:19.807751   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.807760   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:19.807766   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:19.807830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:19.851023   80228 cri.go:89] found id: ""
	I0814 17:40:19.851054   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.851067   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:19.851083   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:19.851154   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:19.882961   80228 cri.go:89] found id: ""
	I0814 17:40:19.882987   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.882997   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:19.883005   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:19.883063   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:19.920312   80228 cri.go:89] found id: ""
	I0814 17:40:19.920345   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.920356   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:19.920365   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:19.920430   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:19.953628   80228 cri.go:89] found id: ""
	I0814 17:40:19.953658   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.953671   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:19.953683   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:19.953741   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:19.984998   80228 cri.go:89] found id: ""
	I0814 17:40:19.985028   80228 logs.go:276] 0 containers: []
	W0814 17:40:19.985036   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:19.985043   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:19.985092   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:20.018728   80228 cri.go:89] found id: ""
	I0814 17:40:20.018753   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.018761   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:20.018766   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:20.018814   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:20.050718   80228 cri.go:89] found id: ""
	I0814 17:40:20.050743   80228 logs.go:276] 0 containers: []
	W0814 17:40:20.050757   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:20.050765   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:20.050777   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:20.101567   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:20.101602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:20.114890   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:20.114920   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:20.183926   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:20.183948   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:20.183960   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:20.270195   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:20.270223   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:18.513348   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.513633   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:20.445352   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.945704   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.260365   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:24.760475   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:22.807078   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:22.820187   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:22.820260   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:22.852474   80228 cri.go:89] found id: ""
	I0814 17:40:22.852504   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.852514   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:22.852522   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:22.852596   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:22.887141   80228 cri.go:89] found id: ""
	I0814 17:40:22.887167   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.887177   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:22.887184   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:22.887248   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:22.919384   80228 cri.go:89] found id: ""
	I0814 17:40:22.919417   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.919428   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:22.919436   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:22.919502   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:22.951877   80228 cri.go:89] found id: ""
	I0814 17:40:22.951897   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.951905   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:22.951910   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:22.951965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:22.987712   80228 cri.go:89] found id: ""
	I0814 17:40:22.987742   80228 logs.go:276] 0 containers: []
	W0814 17:40:22.987752   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:22.987760   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:22.987832   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:23.025562   80228 cri.go:89] found id: ""
	I0814 17:40:23.025597   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.025608   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:23.025616   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:23.025680   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:23.058928   80228 cri.go:89] found id: ""
	I0814 17:40:23.058955   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.058962   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:23.058969   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:23.059025   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:23.096807   80228 cri.go:89] found id: ""
	I0814 17:40:23.096836   80228 logs.go:276] 0 containers: []
	W0814 17:40:23.096847   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:23.096858   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:23.096874   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:23.148943   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:23.148977   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:23.161905   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:23.161927   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:23.232119   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:23.232147   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:23.232160   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:23.320693   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:23.320731   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:25.858506   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:25.871891   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:25.871964   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:25.904732   80228 cri.go:89] found id: ""
	I0814 17:40:25.904760   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.904769   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:25.904775   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:25.904830   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:25.936317   80228 cri.go:89] found id: ""
	I0814 17:40:25.936347   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.936358   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:25.936365   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:25.936427   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:25.969921   80228 cri.go:89] found id: ""
	I0814 17:40:25.969946   80228 logs.go:276] 0 containers: []
	W0814 17:40:25.969954   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:25.969960   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:25.970009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:26.022832   80228 cri.go:89] found id: ""
	I0814 17:40:26.022862   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.022872   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:26.022880   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:26.022941   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:26.056178   80228 cri.go:89] found id: ""
	I0814 17:40:26.056206   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.056214   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:26.056224   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:26.056275   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:26.086921   80228 cri.go:89] found id: ""
	I0814 17:40:26.086955   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.086966   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:26.086974   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:26.087031   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:26.120631   80228 cri.go:89] found id: ""
	I0814 17:40:26.120665   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.120677   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:26.120686   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:26.120745   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:26.154258   80228 cri.go:89] found id: ""
	I0814 17:40:26.154289   80228 logs.go:276] 0 containers: []
	W0814 17:40:26.154300   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:26.154310   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:26.154324   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:26.208366   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:26.208405   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:26.222160   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:26.222192   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:26.294737   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:26.294756   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:26.294768   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:22.513813   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.013707   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:25.444691   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.944277   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.945043   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:27.260184   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.262080   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:26.372870   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:26.372906   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:28.908165   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:28.920754   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:28.920816   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:28.953950   80228 cri.go:89] found id: ""
	I0814 17:40:28.953971   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.953978   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:28.953987   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:28.954035   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:28.985228   80228 cri.go:89] found id: ""
	I0814 17:40:28.985266   80228 logs.go:276] 0 containers: []
	W0814 17:40:28.985278   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:28.985286   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:28.985347   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:29.016295   80228 cri.go:89] found id: ""
	I0814 17:40:29.016328   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.016336   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:29.016341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:29.016392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:29.048664   80228 cri.go:89] found id: ""
	I0814 17:40:29.048696   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.048707   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:29.048715   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:29.048778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:29.080441   80228 cri.go:89] found id: ""
	I0814 17:40:29.080466   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.080474   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:29.080520   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:29.080584   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:29.112377   80228 cri.go:89] found id: ""
	I0814 17:40:29.112407   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.112418   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:29.112426   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:29.112493   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:29.145368   80228 cri.go:89] found id: ""
	I0814 17:40:29.145395   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.145403   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:29.145409   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:29.145471   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:29.177305   80228 cri.go:89] found id: ""
	I0814 17:40:29.177333   80228 logs.go:276] 0 containers: []
	W0814 17:40:29.177341   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:29.177350   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:29.177366   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:29.232156   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:29.232197   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:29.245286   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:29.245317   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:29.322257   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:29.322286   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:29.322302   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:29.397679   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:29.397714   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:27.512862   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:29.514756   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.945087   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.444743   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.760242   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.259825   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:31.935264   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:31.948380   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:31.948446   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:31.978898   80228 cri.go:89] found id: ""
	I0814 17:40:31.978925   80228 logs.go:276] 0 containers: []
	W0814 17:40:31.978932   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:31.978939   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:31.978989   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:32.010652   80228 cri.go:89] found id: ""
	I0814 17:40:32.010681   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.010692   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:32.010699   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:32.010767   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:32.044821   80228 cri.go:89] found id: ""
	I0814 17:40:32.044852   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.044860   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:32.044866   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:32.044915   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:32.076359   80228 cri.go:89] found id: ""
	I0814 17:40:32.076388   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.076398   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:32.076406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:32.076469   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:32.107652   80228 cri.go:89] found id: ""
	I0814 17:40:32.107680   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.107692   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:32.107709   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:32.107770   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:32.138445   80228 cri.go:89] found id: ""
	I0814 17:40:32.138473   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.138484   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:32.138492   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:32.138558   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:32.173771   80228 cri.go:89] found id: ""
	I0814 17:40:32.173794   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.173802   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:32.173807   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:32.173857   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:32.206387   80228 cri.go:89] found id: ""
	I0814 17:40:32.206418   80228 logs.go:276] 0 containers: []
	W0814 17:40:32.206429   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:32.206441   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:32.206454   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.258114   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:32.258148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:32.271984   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:32.272009   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:32.335423   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:32.335447   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:32.335464   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:32.411155   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:32.411206   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:34.975280   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:34.988098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:34.988176   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:35.022020   80228 cri.go:89] found id: ""
	I0814 17:40:35.022047   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.022062   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:35.022071   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:35.022124   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:35.055528   80228 cri.go:89] found id: ""
	I0814 17:40:35.055568   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.055578   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:35.055586   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:35.055647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:35.088373   80228 cri.go:89] found id: ""
	I0814 17:40:35.088404   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.088415   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:35.088422   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:35.088489   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:35.123162   80228 cri.go:89] found id: ""
	I0814 17:40:35.123188   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.123198   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:35.123206   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:35.123268   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:35.160240   80228 cri.go:89] found id: ""
	I0814 17:40:35.160267   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.160277   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:35.160286   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:35.160348   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:35.196249   80228 cri.go:89] found id: ""
	I0814 17:40:35.196276   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.196285   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:35.196293   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:35.196359   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:35.232564   80228 cri.go:89] found id: ""
	I0814 17:40:35.232588   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.232598   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:35.232606   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:35.232671   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:35.267357   80228 cri.go:89] found id: ""
	I0814 17:40:35.267383   80228 logs.go:276] 0 containers: []
	W0814 17:40:35.267392   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:35.267399   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:35.267412   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:35.279779   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:35.279806   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:35.347748   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:35.347769   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:35.347782   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:35.427900   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:35.427932   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:35.468925   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:35.468953   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:32.013942   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:34.513138   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.944749   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.444665   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:36.760292   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:38.020581   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:38.034985   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:38.035066   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:38.070206   80228 cri.go:89] found id: ""
	I0814 17:40:38.070231   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.070240   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:38.070246   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:38.070294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:38.103859   80228 cri.go:89] found id: ""
	I0814 17:40:38.103885   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.103893   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:38.103898   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:38.103947   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:38.138247   80228 cri.go:89] found id: ""
	I0814 17:40:38.138271   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.138278   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:38.138285   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:38.138345   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:38.179475   80228 cri.go:89] found id: ""
	I0814 17:40:38.179511   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.179520   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:38.179526   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:38.179578   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:38.224892   80228 cri.go:89] found id: ""
	I0814 17:40:38.224922   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.224932   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:38.224940   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:38.224996   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:38.270456   80228 cri.go:89] found id: ""
	I0814 17:40:38.270485   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.270497   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:38.270504   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:38.270569   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:38.305267   80228 cri.go:89] found id: ""
	I0814 17:40:38.305300   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.305308   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:38.305315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:38.305387   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:38.336942   80228 cri.go:89] found id: ""
	I0814 17:40:38.336978   80228 logs.go:276] 0 containers: []
	W0814 17:40:38.336989   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:38.337000   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:38.337016   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:38.388618   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:38.388651   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:38.403442   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:38.403472   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:38.478225   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:38.478256   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:38.478273   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:38.553400   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:38.553440   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:41.089947   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:41.101989   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:41.102070   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:41.133743   80228 cri.go:89] found id: ""
	I0814 17:40:41.133767   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.133774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:41.133780   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:41.133828   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:41.169671   80228 cri.go:89] found id: ""
	I0814 17:40:41.169706   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.169714   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:41.169721   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:41.169773   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:41.203425   80228 cri.go:89] found id: ""
	I0814 17:40:41.203451   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.203459   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:41.203475   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:41.203534   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:41.237031   80228 cri.go:89] found id: ""
	I0814 17:40:41.237064   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.237075   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:41.237084   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:41.237149   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:41.271095   80228 cri.go:89] found id: ""
	I0814 17:40:41.271120   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.271128   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:41.271134   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:41.271190   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:41.303640   80228 cri.go:89] found id: ""
	I0814 17:40:41.303672   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.303684   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:41.303692   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:41.303755   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:37.013555   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:39.013733   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.013910   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.943472   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.944582   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.261795   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:43.759672   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:41.336010   80228 cri.go:89] found id: ""
	I0814 17:40:41.336047   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.336062   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:41.336071   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:41.336140   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:41.370098   80228 cri.go:89] found id: ""
	I0814 17:40:41.370133   80228 logs.go:276] 0 containers: []
	W0814 17:40:41.370143   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:41.370154   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:41.370168   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:41.420760   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:41.420794   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:41.433651   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:41.433678   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:41.506623   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:41.506644   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:41.506657   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:41.591390   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:41.591426   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:44.130649   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:44.144362   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:44.144428   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:44.178485   80228 cri.go:89] found id: ""
	I0814 17:40:44.178516   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.178527   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:44.178535   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:44.178600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:44.214231   80228 cri.go:89] found id: ""
	I0814 17:40:44.214260   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.214268   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:44.214274   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:44.214336   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:44.248483   80228 cri.go:89] found id: ""
	I0814 17:40:44.248513   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.248524   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:44.248531   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:44.248600   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:44.282445   80228 cri.go:89] found id: ""
	I0814 17:40:44.282472   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.282481   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:44.282493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:44.282560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:44.315141   80228 cri.go:89] found id: ""
	I0814 17:40:44.315169   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.315190   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:44.315198   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:44.315259   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:44.346756   80228 cri.go:89] found id: ""
	I0814 17:40:44.346781   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.346789   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:44.346795   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:44.346853   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:44.378143   80228 cri.go:89] found id: ""
	I0814 17:40:44.378172   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.378183   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:44.378191   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:44.378255   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:44.411526   80228 cri.go:89] found id: ""
	I0814 17:40:44.411557   80228 logs.go:276] 0 containers: []
	W0814 17:40:44.411567   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:44.411578   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:44.411592   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:44.459873   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:44.459913   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:44.473112   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:44.473148   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:44.547514   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:44.547546   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:44.547579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:44.630377   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:44.630415   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:43.512113   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.512590   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.945080   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.946506   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:45.760626   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:48.260015   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.260186   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:47.173094   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:47.185854   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:47.185927   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:47.228755   80228 cri.go:89] found id: ""
	I0814 17:40:47.228781   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.228788   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:47.228795   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:47.228851   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:47.264986   80228 cri.go:89] found id: ""
	I0814 17:40:47.265020   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.265031   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:47.265037   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:47.265100   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:47.296900   80228 cri.go:89] found id: ""
	I0814 17:40:47.296929   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.296940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:47.296947   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:47.297009   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:47.328120   80228 cri.go:89] found id: ""
	I0814 17:40:47.328147   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.328155   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:47.328161   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:47.328210   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:47.364147   80228 cri.go:89] found id: ""
	I0814 17:40:47.364171   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.364178   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:47.364184   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:47.364238   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:47.400466   80228 cri.go:89] found id: ""
	I0814 17:40:47.400493   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.400501   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:47.400507   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:47.400562   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:47.432681   80228 cri.go:89] found id: ""
	I0814 17:40:47.432713   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.432724   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:47.432732   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:47.432801   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:47.465466   80228 cri.go:89] found id: ""
	I0814 17:40:47.465498   80228 logs.go:276] 0 containers: []
	W0814 17:40:47.465510   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:47.465522   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:47.465536   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:47.502076   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:47.502114   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:47.554451   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:47.554488   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:47.567658   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:47.567690   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:47.635805   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:47.635829   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:47.635844   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:50.215353   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:50.227723   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:50.227795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:50.258250   80228 cri.go:89] found id: ""
	I0814 17:40:50.258276   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.258287   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:50.258296   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:50.258363   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:50.291371   80228 cri.go:89] found id: ""
	I0814 17:40:50.291406   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.291416   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:50.291423   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:50.291479   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:50.321449   80228 cri.go:89] found id: ""
	I0814 17:40:50.321473   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.321481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:50.321486   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:50.321545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:50.351752   80228 cri.go:89] found id: ""
	I0814 17:40:50.351780   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.351791   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:50.351799   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:50.351856   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:50.382022   80228 cri.go:89] found id: ""
	I0814 17:40:50.382050   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.382057   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:50.382063   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:50.382118   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:50.414057   80228 cri.go:89] found id: ""
	I0814 17:40:50.414083   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.414091   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:50.414098   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:50.414156   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:50.447508   80228 cri.go:89] found id: ""
	I0814 17:40:50.447530   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.447537   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:50.447543   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:50.447606   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:50.487401   80228 cri.go:89] found id: ""
	I0814 17:40:50.487425   80228 logs.go:276] 0 containers: []
	W0814 17:40:50.487434   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:50.487442   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:50.487455   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:50.524404   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:50.524439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:50.578220   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:50.578256   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:50.591405   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:50.591431   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:50.657727   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:50.657750   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:50.657762   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:47.514490   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.012588   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:50.445363   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.944903   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:52.760728   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.760918   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:53.237985   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:53.250502   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:53.250572   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:53.285728   80228 cri.go:89] found id: ""
	I0814 17:40:53.285763   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.285774   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:53.285784   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:53.285848   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:53.318195   80228 cri.go:89] found id: ""
	I0814 17:40:53.318231   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.318243   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:53.318252   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:53.318317   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:53.350259   80228 cri.go:89] found id: ""
	I0814 17:40:53.350291   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.350302   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:53.350310   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:53.350385   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:53.385894   80228 cri.go:89] found id: ""
	I0814 17:40:53.385920   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.385928   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:53.385934   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:53.385983   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:53.420851   80228 cri.go:89] found id: ""
	I0814 17:40:53.420878   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.420890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:53.420897   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:53.420963   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:53.458332   80228 cri.go:89] found id: ""
	I0814 17:40:53.458370   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.458381   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:53.458392   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:53.458465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:53.489719   80228 cri.go:89] found id: ""
	I0814 17:40:53.489750   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.489759   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:53.489765   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:53.489820   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:53.522942   80228 cri.go:89] found id: ""
	I0814 17:40:53.522977   80228 logs.go:276] 0 containers: []
	W0814 17:40:53.522988   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:53.522998   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:53.523013   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:53.599450   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:53.599492   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:53.637225   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:53.637254   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:53.688605   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:53.688647   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:53.704601   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:53.704633   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:53.775046   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.275201   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:56.288406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:56.288463   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:52.013747   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:54.513735   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.514335   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:55.445462   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.447142   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.946025   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:57.261047   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:59.760136   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:40:56.322862   80228 cri.go:89] found id: ""
	I0814 17:40:56.322891   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.322899   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:56.322905   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:56.322954   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:56.356214   80228 cri.go:89] found id: ""
	I0814 17:40:56.356243   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.356262   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:56.356268   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:56.356338   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:56.388877   80228 cri.go:89] found id: ""
	I0814 17:40:56.388900   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.388909   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:56.388915   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:56.388967   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:56.422552   80228 cri.go:89] found id: ""
	I0814 17:40:56.422577   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.422585   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:56.422590   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:56.422649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:56.456995   80228 cri.go:89] found id: ""
	I0814 17:40:56.457018   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.457026   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:56.457031   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:56.457079   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:56.495745   80228 cri.go:89] found id: ""
	I0814 17:40:56.495772   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.495788   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:56.495797   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:56.495868   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:56.529139   80228 cri.go:89] found id: ""
	I0814 17:40:56.529171   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.529179   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:56.529185   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:56.529237   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:56.561377   80228 cri.go:89] found id: ""
	I0814 17:40:56.561406   80228 logs.go:276] 0 containers: []
	W0814 17:40:56.561414   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:56.561424   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:56.561439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:56.601504   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:56.601537   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:56.653369   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:56.653403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:56.666117   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:56.666144   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:56.731921   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:56.731949   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:56.731963   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.315712   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:40:59.328425   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:40:59.328486   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:40:59.364056   80228 cri.go:89] found id: ""
	I0814 17:40:59.364080   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.364088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:40:59.364094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:40:59.364151   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:40:59.398948   80228 cri.go:89] found id: ""
	I0814 17:40:59.398971   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.398978   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:40:59.398984   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:40:59.399029   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:40:59.430301   80228 cri.go:89] found id: ""
	I0814 17:40:59.430327   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.430335   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:40:59.430341   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:40:59.430406   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:40:59.465278   80228 cri.go:89] found id: ""
	I0814 17:40:59.465301   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.465309   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:40:59.465315   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:40:59.465372   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:40:59.497544   80228 cri.go:89] found id: ""
	I0814 17:40:59.497575   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.497586   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:40:59.497595   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:40:59.497659   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:40:59.529463   80228 cri.go:89] found id: ""
	I0814 17:40:59.529494   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.529506   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:40:59.529513   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:40:59.529587   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:40:59.562448   80228 cri.go:89] found id: ""
	I0814 17:40:59.562477   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.562487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:40:59.562495   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:40:59.562609   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:40:59.594059   80228 cri.go:89] found id: ""
	I0814 17:40:59.594089   80228 logs.go:276] 0 containers: []
	W0814 17:40:59.594103   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:40:59.594112   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:40:59.594123   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:40:59.672139   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:40:59.672172   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:40:59.710714   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:40:59.710743   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:40:59.762645   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:40:59.762676   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:40:59.776006   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:40:59.776033   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:40:59.838187   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:40:59.013030   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:01.013280   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.445595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.944484   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.260244   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:04.760862   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:02.338964   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:02.351381   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:02.351460   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:02.383206   80228 cri.go:89] found id: ""
	I0814 17:41:02.383235   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.383244   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:02.383250   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:02.383310   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:02.417016   80228 cri.go:89] found id: ""
	I0814 17:41:02.417042   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.417049   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:02.417055   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:02.417111   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:02.451936   80228 cri.go:89] found id: ""
	I0814 17:41:02.451964   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.451974   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:02.451982   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:02.452042   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:02.489896   80228 cri.go:89] found id: ""
	I0814 17:41:02.489927   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.489937   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:02.489945   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:02.490011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:02.524273   80228 cri.go:89] found id: ""
	I0814 17:41:02.524308   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.524339   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:02.524346   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:02.524409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:02.558813   80228 cri.go:89] found id: ""
	I0814 17:41:02.558842   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.558850   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:02.558861   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:02.558917   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:02.592704   80228 cri.go:89] found id: ""
	I0814 17:41:02.592733   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.592747   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:02.592753   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:02.592818   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:02.625250   80228 cri.go:89] found id: ""
	I0814 17:41:02.625277   80228 logs.go:276] 0 containers: []
	W0814 17:41:02.625288   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:02.625299   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:02.625312   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:02.677577   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:02.677613   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:02.691407   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:02.691439   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:02.756797   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:02.756869   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:02.756888   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:02.830803   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:02.830842   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.370085   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:05.385272   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:05.385342   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:05.421775   80228 cri.go:89] found id: ""
	I0814 17:41:05.421799   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.421806   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:05.421812   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:05.421860   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:05.457054   80228 cri.go:89] found id: ""
	I0814 17:41:05.457083   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.457093   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:05.457100   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:05.457153   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:05.489290   80228 cri.go:89] found id: ""
	I0814 17:41:05.489330   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.489338   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:05.489345   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:05.489392   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:05.527066   80228 cri.go:89] found id: ""
	I0814 17:41:05.527091   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.527098   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:05.527105   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:05.527155   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:05.563882   80228 cri.go:89] found id: ""
	I0814 17:41:05.563915   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.563925   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:05.563931   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:05.563982   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:05.601837   80228 cri.go:89] found id: ""
	I0814 17:41:05.601863   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.601871   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:05.601879   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:05.601940   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:05.633503   80228 cri.go:89] found id: ""
	I0814 17:41:05.633531   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.633539   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:05.633545   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:05.633615   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:05.668281   80228 cri.go:89] found id: ""
	I0814 17:41:05.668312   80228 logs.go:276] 0 containers: []
	W0814 17:41:05.668324   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:05.668335   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:05.668349   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:05.747214   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:05.747249   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:05.784408   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:05.784441   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:05.835067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:05.835103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:05.847938   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:05.847966   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:05.917404   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:03.513033   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:05.514476   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:06.944595   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.944850   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:07.260430   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:09.762513   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:08.417559   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:08.431092   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:08.431165   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:08.465357   80228 cri.go:89] found id: ""
	I0814 17:41:08.465515   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.465543   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:08.465560   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:08.465675   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:08.499085   80228 cri.go:89] found id: ""
	I0814 17:41:08.499114   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.499123   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:08.499129   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:08.499180   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:08.533881   80228 cri.go:89] found id: ""
	I0814 17:41:08.533909   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.533917   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:08.533922   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:08.533972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:08.570503   80228 cri.go:89] found id: ""
	I0814 17:41:08.570549   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.570560   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:08.570572   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:08.570649   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:08.602557   80228 cri.go:89] found id: ""
	I0814 17:41:08.602599   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.602610   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:08.602691   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:08.602785   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:08.636174   80228 cri.go:89] found id: ""
	I0814 17:41:08.636199   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.636206   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:08.636213   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:08.636261   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:08.672774   80228 cri.go:89] found id: ""
	I0814 17:41:08.672804   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.672815   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:08.672823   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:08.672890   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:08.705535   80228 cri.go:89] found id: ""
	I0814 17:41:08.705590   80228 logs.go:276] 0 containers: []
	W0814 17:41:08.705605   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:08.705622   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:08.705641   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:08.744315   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:08.744341   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:08.794632   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:08.794666   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:08.808089   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:08.808117   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:08.876417   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:08.876436   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:08.876452   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:08.013688   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:10.512639   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.444206   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:13.944056   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:12.260065   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:14.759640   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:11.458562   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:11.470905   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:11.470965   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:11.505992   80228 cri.go:89] found id: ""
	I0814 17:41:11.506023   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.506036   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:11.506044   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:11.506112   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:11.540893   80228 cri.go:89] found id: ""
	I0814 17:41:11.540922   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.540932   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:11.540945   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:11.541001   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:11.575423   80228 cri.go:89] found id: ""
	I0814 17:41:11.575448   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.575455   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:11.575462   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:11.575520   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:11.608126   80228 cri.go:89] found id: ""
	I0814 17:41:11.608155   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.608164   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:11.608171   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:11.608222   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:11.640165   80228 cri.go:89] found id: ""
	I0814 17:41:11.640190   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.640198   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:11.640204   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:11.640263   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:11.674425   80228 cri.go:89] found id: ""
	I0814 17:41:11.674446   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.674455   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:11.674460   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:11.674513   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:11.707448   80228 cri.go:89] found id: ""
	I0814 17:41:11.707477   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.707487   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:11.707493   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:11.707555   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:11.744309   80228 cri.go:89] found id: ""
	I0814 17:41:11.744338   80228 logs.go:276] 0 containers: []
	W0814 17:41:11.744346   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:11.744363   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:11.744375   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:11.824165   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:11.824196   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:11.862013   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:11.862039   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:11.913862   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:11.913902   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:11.927147   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:11.927178   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:11.998403   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.498590   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:14.512847   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:14.512938   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:14.549255   80228 cri.go:89] found id: ""
	I0814 17:41:14.549288   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.549306   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:14.549316   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:14.549382   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:14.588917   80228 cri.go:89] found id: ""
	I0814 17:41:14.588948   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.588956   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:14.588963   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:14.589012   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:14.622581   80228 cri.go:89] found id: ""
	I0814 17:41:14.622611   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.622621   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:14.622628   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:14.622693   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:14.656029   80228 cri.go:89] found id: ""
	I0814 17:41:14.656056   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.656064   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:14.656070   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:14.656117   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:14.687502   80228 cri.go:89] found id: ""
	I0814 17:41:14.687527   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.687536   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:14.687541   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:14.687614   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:14.720682   80228 cri.go:89] found id: ""
	I0814 17:41:14.720713   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.720721   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:14.720728   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:14.720778   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:14.752482   80228 cri.go:89] found id: ""
	I0814 17:41:14.752511   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.752520   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:14.752525   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:14.752577   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:14.792980   80228 cri.go:89] found id: ""
	I0814 17:41:14.793004   80228 logs.go:276] 0 containers: []
	W0814 17:41:14.793014   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:14.793026   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:14.793042   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:14.845259   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:14.845297   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:14.858530   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:14.858556   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:14.931025   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:14.931054   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:14.931067   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:15.008081   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:15.008115   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:13.014174   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:15.512768   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444772   79521 pod_ready.go:102] pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:16.444802   79521 pod_ready.go:81] duration metric: took 4m0.006448573s for pod "metrics-server-6867b74b74-jflvw" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:16.444810   79521 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0814 17:41:16.444817   79521 pod_ready.go:38] duration metric: took 4m5.044051569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:16.444832   79521 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:41:16.444858   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:16.444901   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:16.499710   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.499742   79521 cri.go:89] found id: ""
	I0814 17:41:16.499751   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:16.499815   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.504467   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:16.504544   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:16.546815   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.546842   79521 cri.go:89] found id: ""
	I0814 17:41:16.546851   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:16.546905   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.550917   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:16.550986   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:16.590195   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:16.590216   79521 cri.go:89] found id: ""
	I0814 17:41:16.590224   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:16.590267   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.594123   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:16.594196   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:16.631058   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:16.631091   79521 cri.go:89] found id: ""
	I0814 17:41:16.631101   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:16.631163   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.635151   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:16.635226   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:16.671555   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.671582   79521 cri.go:89] found id: ""
	I0814 17:41:16.671592   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:16.671657   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.675790   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:16.675847   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:16.713131   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:16.713157   79521 cri.go:89] found id: ""
	I0814 17:41:16.713165   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:16.713217   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.717296   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:16.717354   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:16.756212   79521 cri.go:89] found id: ""
	I0814 17:41:16.756245   79521 logs.go:276] 0 containers: []
	W0814 17:41:16.756255   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:16.756261   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:16.756324   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:16.802379   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.802411   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.802417   79521 cri.go:89] found id: ""
	I0814 17:41:16.802431   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:16.802492   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.807105   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:16.811210   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:16.811241   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:16.852490   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:16.852526   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:16.894384   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:16.894425   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:16.929919   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:16.929949   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:16.965031   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:16.965061   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:17.468878   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.468945   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.482799   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.482826   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:17.610874   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:17.610904   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:17.649292   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:17.649322   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:17.691014   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:17.691045   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:17.749218   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.749254   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.794240   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.794280   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.868805   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:17.868851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:16.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:18.760369   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:17.544873   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:17.557699   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:17.557791   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:17.600314   80228 cri.go:89] found id: ""
	I0814 17:41:17.600347   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.600360   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:17.600370   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:17.600441   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:17.634873   80228 cri.go:89] found id: ""
	I0814 17:41:17.634902   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.634914   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:17.634923   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:17.634986   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:17.670521   80228 cri.go:89] found id: ""
	I0814 17:41:17.670552   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.670563   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:17.670571   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:17.670647   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:17.705587   80228 cri.go:89] found id: ""
	I0814 17:41:17.705612   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.705626   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:17.705632   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:17.705682   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:17.768178   80228 cri.go:89] found id: ""
	I0814 17:41:17.768207   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.768218   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:17.768226   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:17.768290   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:17.804692   80228 cri.go:89] found id: ""
	I0814 17:41:17.804721   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.804729   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:17.804735   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:17.804795   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:17.847994   80228 cri.go:89] found id: ""
	I0814 17:41:17.848030   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.848041   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:17.848052   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:17.848122   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:17.883905   80228 cri.go:89] found id: ""
	I0814 17:41:17.883935   80228 logs.go:276] 0 containers: []
	W0814 17:41:17.883944   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:17.883953   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:17.883965   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:17.931481   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:17.931522   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:17.983315   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:17.983363   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.996941   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:17.996981   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:18.067254   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:18.067279   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:18.067295   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:20.642099   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.655941   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.656014   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.692525   80228 cri.go:89] found id: ""
	I0814 17:41:20.692554   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.692565   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:20.692577   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.692634   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.727721   80228 cri.go:89] found id: ""
	I0814 17:41:20.727755   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.727769   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:20.727778   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.727845   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.770441   80228 cri.go:89] found id: ""
	I0814 17:41:20.770471   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.770481   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:20.770488   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.770550   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.807932   80228 cri.go:89] found id: ""
	I0814 17:41:20.807961   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.807968   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:20.807975   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.808030   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.849919   80228 cri.go:89] found id: ""
	I0814 17:41:20.849944   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.849963   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:20.849970   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.850045   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.887351   80228 cri.go:89] found id: ""
	I0814 17:41:20.887382   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.887393   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:20.887401   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.887465   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.921284   80228 cri.go:89] found id: ""
	I0814 17:41:20.921310   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.921321   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.921328   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:20.921409   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:20.955238   80228 cri.go:89] found id: ""
	I0814 17:41:20.955267   80228 logs.go:276] 0 containers: []
	W0814 17:41:20.955278   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:20.955288   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.955314   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:21.024544   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:21.024565   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.024579   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:21.103987   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.104019   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.145515   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.145550   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.197307   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.197346   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:17.514682   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.015152   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:20.429364   79521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:20.445075   79521 api_server.go:72] duration metric: took 4m16.759338748s to wait for apiserver process to appear ...
	I0814 17:41:20.445102   79521 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:41:20.445133   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:20.445179   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:20.477630   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.477655   79521 cri.go:89] found id: ""
	I0814 17:41:20.477663   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:20.477714   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.481667   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:20.481728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:20.514443   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.514465   79521 cri.go:89] found id: ""
	I0814 17:41:20.514473   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:20.514516   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.518344   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:20.518401   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:20.559625   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:20.559647   79521 cri.go:89] found id: ""
	I0814 17:41:20.559653   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:20.559706   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.564137   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:20.564203   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:20.603504   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:20.603531   79521 cri.go:89] found id: ""
	I0814 17:41:20.603540   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:20.603602   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.608260   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:20.608334   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:20.641466   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:20.641487   79521 cri.go:89] found id: ""
	I0814 17:41:20.641494   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:20.641538   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.645566   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:20.645625   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:20.685003   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:20.685032   79521 cri.go:89] found id: ""
	I0814 17:41:20.685042   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:20.685104   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.690347   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:20.690429   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:20.733753   79521 cri.go:89] found id: ""
	I0814 17:41:20.733782   79521 logs.go:276] 0 containers: []
	W0814 17:41:20.733793   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:20.733800   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:20.733862   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:20.781659   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:20.781683   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:20.781689   79521 cri.go:89] found id: ""
	I0814 17:41:20.781697   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:20.781753   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.786293   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:20.790358   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:20.790377   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:20.916473   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:20.916513   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:20.968706   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:20.968743   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:21.003507   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:21.003546   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:21.049909   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:21.049961   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:21.090052   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:21.090080   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:21.129551   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:21.129585   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:21.174792   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:21.174828   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:21.247392   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:21.247440   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:21.261095   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:21.261129   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:21.306583   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:21.306616   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:21.339602   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:21.339642   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:21.397695   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:21.397732   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.301807   79521 api_server.go:253] Checking apiserver healthz at https://192.168.61.2:8443/healthz ...
	I0814 17:41:24.306392   79521 api_server.go:279] https://192.168.61.2:8443/healthz returned 200:
	ok
	I0814 17:41:24.307364   79521 api_server.go:141] control plane version: v1.31.0
	I0814 17:41:24.307390   79521 api_server.go:131] duration metric: took 3.862280551s to wait for apiserver health ...
	I0814 17:41:24.307398   79521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:41:24.307418   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:24.307463   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:24.342519   79521 cri.go:89] found id: "221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:24.342552   79521 cri.go:89] found id: ""
	I0814 17:41:24.342561   79521 logs.go:276] 1 containers: [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0]
	I0814 17:41:24.342627   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.346361   79521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:24.346422   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:24.386973   79521 cri.go:89] found id: "4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:24.387001   79521 cri.go:89] found id: ""
	I0814 17:41:24.387012   79521 logs.go:276] 1 containers: [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c]
	I0814 17:41:24.387066   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.390942   79521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:24.390999   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:24.426841   79521 cri.go:89] found id: "0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:24.426863   79521 cri.go:89] found id: ""
	I0814 17:41:24.426872   79521 logs.go:276] 1 containers: [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03]
	I0814 17:41:24.426927   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.430856   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:24.430917   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:24.467024   79521 cri.go:89] found id: "e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.467050   79521 cri.go:89] found id: ""
	I0814 17:41:24.467059   79521 logs.go:276] 1 containers: [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5]
	I0814 17:41:24.467117   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.471659   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:24.471728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:24.506759   79521 cri.go:89] found id: "4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:24.506786   79521 cri.go:89] found id: ""
	I0814 17:41:24.506799   79521 logs.go:276] 1 containers: [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052]
	I0814 17:41:24.506857   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.511660   79521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:24.511728   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:24.547768   79521 cri.go:89] found id: "038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:24.547795   79521 cri.go:89] found id: ""
	I0814 17:41:24.547805   79521 logs.go:276] 1 containers: [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535]
	I0814 17:41:24.547862   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.552881   79521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:24.552941   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:24.588519   79521 cri.go:89] found id: ""
	I0814 17:41:24.588544   79521 logs.go:276] 0 containers: []
	W0814 17:41:24.588551   79521 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:24.588557   79521 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0814 17:41:24.588602   79521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0814 17:41:24.624604   79521 cri.go:89] found id: "b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.624626   79521 cri.go:89] found id: "bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:24.624630   79521 cri.go:89] found id: ""
	I0814 17:41:24.624636   79521 logs.go:276] 2 containers: [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94]
	I0814 17:41:24.624691   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.628703   79521 ssh_runner.go:195] Run: which crictl
	I0814 17:41:24.632611   79521 logs.go:123] Gathering logs for kube-scheduler [e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5] ...
	I0814 17:41:24.632636   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2594588a11a2cb4611c8f5bac47f6bcef703886413e17e590d9b45c66488ce5"
	I0814 17:41:24.671903   79521 logs.go:123] Gathering logs for storage-provisioner [b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b] ...
	I0814 17:41:24.671935   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c13e26940573daf5ea168c2af51f5e94aee9d33eb69401457b1140c61d224b"
	I0814 17:41:24.709821   79521 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.709851   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:25.107477   79521 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:25.107515   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0814 17:41:25.221012   79521 logs.go:123] Gathering logs for etcd [4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c] ...
	I0814 17:41:25.221041   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3a19329bb34bb04d5cbc469e433826741e265d0031bce8b453b406fa44627c"
	I0814 17:41:20.760924   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.259780   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.260347   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:23.712584   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:23.726467   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:23.726545   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:23.762871   80228 cri.go:89] found id: ""
	I0814 17:41:23.762906   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.762916   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:23.762922   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:23.762972   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:23.800068   80228 cri.go:89] found id: ""
	I0814 17:41:23.800096   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.800105   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:23.800113   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:23.800173   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:23.834913   80228 cri.go:89] found id: ""
	I0814 17:41:23.834945   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.834956   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:23.834963   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:23.835022   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:23.871196   80228 cri.go:89] found id: ""
	I0814 17:41:23.871222   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.871233   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:23.871240   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:23.871294   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:23.907830   80228 cri.go:89] found id: ""
	I0814 17:41:23.907854   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.907862   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:23.907868   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:23.907926   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:23.941110   80228 cri.go:89] found id: ""
	I0814 17:41:23.941133   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.941141   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:23.941146   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:23.941197   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:23.973602   80228 cri.go:89] found id: ""
	I0814 17:41:23.973631   80228 logs.go:276] 0 containers: []
	W0814 17:41:23.973649   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:23.973655   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:23.973710   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:24.007398   80228 cri.go:89] found id: ""
	I0814 17:41:24.007436   80228 logs.go:276] 0 containers: []
	W0814 17:41:24.007450   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:24.007462   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:24.007478   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:24.061830   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:24.061867   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:24.075012   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:24.075046   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:24.148666   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:24.148692   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:24.148703   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:24.230208   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:24.230248   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:22.513616   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.013383   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:25.272397   79521 logs.go:123] Gathering logs for coredns [0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03] ...
	I0814 17:41:25.272429   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ac264c97809e749c425c957dc7b7e532748c105b89bdbd86dbeed2e30c4ca03"
	I0814 17:41:25.317574   79521 logs.go:123] Gathering logs for kube-proxy [4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052] ...
	I0814 17:41:25.317603   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b094a20accacdf3700db9d21c259d7569bb68708c41f9a46cb38bdeab450052"
	I0814 17:41:25.352239   79521 logs.go:123] Gathering logs for kube-controller-manager [038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535] ...
	I0814 17:41:25.352271   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 038cd1233632225b6711672b9d1bf2d934538c165325291176bcc415af841535"
	I0814 17:41:25.409997   79521 logs.go:123] Gathering logs for storage-provisioner [bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94] ...
	I0814 17:41:25.410030   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdac981ff1f5d171c611f8faaad8ab26db95100695b839a88dada2514a710d94"
	I0814 17:41:25.443875   79521 logs.go:123] Gathering logs for container status ...
	I0814 17:41:25.443899   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:25.490987   79521 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:25.491023   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:25.563495   79521 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:25.563531   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:25.577305   79521 logs.go:123] Gathering logs for kube-apiserver [221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0] ...
	I0814 17:41:25.577345   79521 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221f94a9fa6afe86b45393c5dff923413d1bb9ff9fca571968a79ebad3095bc0"
	I0814 17:41:28.147823   79521 system_pods.go:59] 8 kube-system pods found
	I0814 17:41:28.147855   79521 system_pods.go:61] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.147860   79521 system_pods.go:61] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.147864   79521 system_pods.go:61] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.147867   79521 system_pods.go:61] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.147870   79521 system_pods.go:61] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.147874   79521 system_pods.go:61] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.147879   79521 system_pods.go:61] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.147883   79521 system_pods.go:61] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.147890   79521 system_pods.go:74] duration metric: took 3.840486938s to wait for pod list to return data ...
	I0814 17:41:28.147898   79521 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:41:28.150377   79521 default_sa.go:45] found service account: "default"
	I0814 17:41:28.150398   79521 default_sa.go:55] duration metric: took 2.493777ms for default service account to be created ...
	I0814 17:41:28.150406   79521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:41:28.154470   79521 system_pods.go:86] 8 kube-system pods found
	I0814 17:41:28.154494   79521 system_pods.go:89] "coredns-6f6b679f8f-kccp8" [db961449-4326-4700-a3e0-c11ab96df3ae] Running
	I0814 17:41:28.154500   79521 system_pods.go:89] "etcd-embed-certs-309673" [944027b2-a99a-42b5-b947-20d710ac8a40] Running
	I0814 17:41:28.154504   79521 system_pods.go:89] "kube-apiserver-embed-certs-309673" [f029b5f0-c907-413a-ae22-f8a5f36b2904] Running
	I0814 17:41:28.154510   79521 system_pods.go:89] "kube-controller-manager-embed-certs-309673" [8be96015-f424-4d47-8df4-5fb3b2928a22] Running
	I0814 17:41:28.154514   79521 system_pods.go:89] "kube-proxy-z8x9t" [c84ae0e0-8205-4854-82ba-0119b81efe2a] Running
	I0814 17:41:28.154519   79521 system_pods.go:89] "kube-scheduler-embed-certs-309673" [6a6aef8e-a9e6-461b-a624-8c7c8765b71c] Running
	I0814 17:41:28.154525   79521 system_pods.go:89] "metrics-server-6867b74b74-jflvw" [69a57151-6948-46ea-bacf-0915ea90fe44] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:41:28.154530   79521 system_pods.go:89] "storage-provisioner" [0c7d9343-7223-4e8a-9a23-151b98873700] Running
	I0814 17:41:28.154537   79521 system_pods.go:126] duration metric: took 4.125964ms to wait for k8s-apps to be running ...
	I0814 17:41:28.154544   79521 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:41:28.154585   79521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:41:28.170494   79521 system_svc.go:56] duration metric: took 15.940728ms WaitForService to wait for kubelet
	I0814 17:41:28.170524   79521 kubeadm.go:582] duration metric: took 4m24.484791018s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:41:28.170545   79521 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:41:28.173368   79521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:41:28.173395   79521 node_conditions.go:123] node cpu capacity is 2
	I0814 17:41:28.173407   79521 node_conditions.go:105] duration metric: took 2.858344ms to run NodePressure ...
	I0814 17:41:28.173417   79521 start.go:241] waiting for startup goroutines ...
	I0814 17:41:28.173424   79521 start.go:246] waiting for cluster config update ...
	I0814 17:41:28.173435   79521 start.go:255] writing updated cluster config ...
	I0814 17:41:28.173730   79521 ssh_runner.go:195] Run: rm -f paused
	I0814 17:41:28.219460   79521 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:41:28.221461   79521 out.go:177] * Done! kubectl is now configured to use "embed-certs-309673" cluster and "default" namespace by default
	I0814 17:41:27.761580   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:30.260454   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:26.776204   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:26.789057   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:26.789132   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:26.822531   80228 cri.go:89] found id: ""
	I0814 17:41:26.822564   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.822575   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:26.822590   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:26.822651   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:26.855314   80228 cri.go:89] found id: ""
	I0814 17:41:26.855353   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.855365   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:26.855372   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:26.855434   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:26.889389   80228 cri.go:89] found id: ""
	I0814 17:41:26.889413   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.889421   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:26.889427   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:26.889485   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:26.925478   80228 cri.go:89] found id: ""
	I0814 17:41:26.925500   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.925508   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:26.925514   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:26.925560   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:26.957012   80228 cri.go:89] found id: ""
	I0814 17:41:26.957042   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.957053   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:26.957061   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:26.957114   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:26.989358   80228 cri.go:89] found id: ""
	I0814 17:41:26.989388   80228 logs.go:276] 0 containers: []
	W0814 17:41:26.989399   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:26.989406   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:26.989468   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:27.024761   80228 cri.go:89] found id: ""
	I0814 17:41:27.024786   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.024805   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:27.024830   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:27.024895   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:27.059172   80228 cri.go:89] found id: ""
	I0814 17:41:27.059204   80228 logs.go:276] 0 containers: []
	W0814 17:41:27.059215   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:27.059226   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:27.059240   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.096123   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:27.096151   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:27.147689   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:27.147719   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:27.161454   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:27.161483   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:27.234644   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:27.234668   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:27.234680   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:29.817428   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:29.831731   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:29.831811   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:29.868531   80228 cri.go:89] found id: ""
	I0814 17:41:29.868567   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.868577   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:29.868585   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:29.868657   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:29.913578   80228 cri.go:89] found id: ""
	I0814 17:41:29.913602   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.913611   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:29.913617   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:29.913677   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:29.963916   80228 cri.go:89] found id: ""
	I0814 17:41:29.963939   80228 logs.go:276] 0 containers: []
	W0814 17:41:29.963946   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:29.963952   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:29.964011   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:30.016735   80228 cri.go:89] found id: ""
	I0814 17:41:30.016763   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.016773   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:30.016781   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:30.016841   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:30.048852   80228 cri.go:89] found id: ""
	I0814 17:41:30.048880   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.048890   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:30.048898   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:30.048960   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:30.080291   80228 cri.go:89] found id: ""
	I0814 17:41:30.080324   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.080335   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:30.080343   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:30.080506   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:30.113876   80228 cri.go:89] found id: ""
	I0814 17:41:30.113904   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.113914   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:30.113921   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:30.113984   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:30.147568   80228 cri.go:89] found id: ""
	I0814 17:41:30.147594   80228 logs.go:276] 0 containers: []
	W0814 17:41:30.147604   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:30.147614   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:30.147627   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:30.197596   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:30.197630   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:30.210576   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:30.210602   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:30.277711   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:30.277731   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:30.277746   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:30.356556   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:30.356590   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:27.013699   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:29.014020   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:31.512974   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:32.760328   79871 pod_ready.go:102] pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:35.254066   79871 pod_ready.go:81] duration metric: took 4m0.000392709s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" ...
	E0814 17:41:35.254095   79871 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-qtzm8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:41:35.254112   79871 pod_ready.go:38] duration metric: took 4m12.044429915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:41:35.254137   79871 kubeadm.go:597] duration metric: took 4m20.041916203s to restartPrimaryControlPlane
	W0814 17:41:35.254189   79871 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:35.254218   79871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:32.892697   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:32.909435   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:32.909497   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:32.945055   80228 cri.go:89] found id: ""
	I0814 17:41:32.945080   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.945088   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:32.945094   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:32.945150   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:32.979266   80228 cri.go:89] found id: ""
	I0814 17:41:32.979294   80228 logs.go:276] 0 containers: []
	W0814 17:41:32.979305   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:32.979312   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:32.979398   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:33.014260   80228 cri.go:89] found id: ""
	I0814 17:41:33.014286   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.014294   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:33.014299   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:33.014351   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:33.047590   80228 cri.go:89] found id: ""
	I0814 17:41:33.047622   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.047633   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:33.047646   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:33.047711   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:33.081258   80228 cri.go:89] found id: ""
	I0814 17:41:33.081294   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.081328   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:33.081337   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:33.081403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:33.112209   80228 cri.go:89] found id: ""
	I0814 17:41:33.112237   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.112247   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:33.112254   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:33.112318   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:33.143854   80228 cri.go:89] found id: ""
	I0814 17:41:33.143892   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.143904   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:33.143913   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:33.143977   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:33.175147   80228 cri.go:89] found id: ""
	I0814 17:41:33.175190   80228 logs.go:276] 0 containers: []
	W0814 17:41:33.175201   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:33.175212   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:33.175226   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:33.212877   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:33.212908   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:33.268067   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:33.268103   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:33.281357   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:33.281386   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:33.350233   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:33.350257   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:33.350269   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:35.929498   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:35.942290   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:35.942354   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:35.975782   80228 cri.go:89] found id: ""
	I0814 17:41:35.975809   80228 logs.go:276] 0 containers: []
	W0814 17:41:35.975818   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:35.975826   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:35.975886   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:36.008165   80228 cri.go:89] found id: ""
	I0814 17:41:36.008191   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.008200   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:36.008206   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:36.008262   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:36.044912   80228 cri.go:89] found id: ""
	I0814 17:41:36.044937   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.044945   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:36.044954   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:36.045002   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:36.078068   80228 cri.go:89] found id: ""
	I0814 17:41:36.078096   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.078108   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:36.078116   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:36.078179   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:36.110429   80228 cri.go:89] found id: ""
	I0814 17:41:36.110456   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.110467   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:36.110480   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:36.110540   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:36.142086   80228 cri.go:89] found id: ""
	I0814 17:41:36.142111   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.142119   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:36.142125   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:36.142186   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:36.172738   80228 cri.go:89] found id: ""
	I0814 17:41:36.172761   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.172769   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:36.172775   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:36.172831   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:36.204345   80228 cri.go:89] found id: ""
	I0814 17:41:36.204368   80228 logs.go:276] 0 containers: []
	W0814 17:41:36.204376   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
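The repeated cri.go cycle above enumerates containers per component by asking crictl for quiet (ID-only) output filtered by name; an empty result produces the "0 containers" / "No container was found matching" pair. A small Go sketch of that pattern, assuming local execution and an illustrative helper name:

```go
// Illustrative sketch: list CRI container IDs by name filter, as the cri.go lines above do.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out to `crictl ps -a --quiet --name=<name>`, which prints
// one container ID per line, or nothing when no container matches the filter.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if s := strings.TrimSpace(line); s != "" {
			ids = append(ids, s)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```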
	I0814 17:41:36.204388   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:36.204403   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:36.216667   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:36.216689   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:36.279509   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:36.279528   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:36.279540   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:33.513591   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.013400   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:36.360411   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:36.360447   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:36.398193   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:36.398230   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:38.952415   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:38.968484   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:41:38.968554   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:41:39.002450   80228 cri.go:89] found id: ""
	I0814 17:41:39.002479   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.002486   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:41:39.002493   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:41:39.002551   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:41:39.035840   80228 cri.go:89] found id: ""
	I0814 17:41:39.035868   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.035876   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:41:39.035882   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:41:39.035934   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:41:39.069900   80228 cri.go:89] found id: ""
	I0814 17:41:39.069929   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.069940   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:41:39.069946   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:41:39.069999   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:41:39.104657   80228 cri.go:89] found id: ""
	I0814 17:41:39.104681   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.104689   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:41:39.104695   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:41:39.104751   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:41:39.137279   80228 cri.go:89] found id: ""
	I0814 17:41:39.137312   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.137322   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:41:39.137330   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:41:39.137403   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:41:39.170377   80228 cri.go:89] found id: ""
	I0814 17:41:39.170414   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.170424   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:41:39.170430   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:41:39.170491   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:41:39.205742   80228 cri.go:89] found id: ""
	I0814 17:41:39.205779   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.205790   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:41:39.205796   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:41:39.205850   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:41:39.239954   80228 cri.go:89] found id: ""
	I0814 17:41:39.239979   80228 logs.go:276] 0 containers: []
	W0814 17:41:39.239987   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:41:39.239994   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:41:39.240011   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:41:39.276587   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:41:39.276619   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:41:39.329286   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:41:39.329322   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:41:39.342232   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:41:39.342257   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:41:39.411043   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:41:39.411063   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:41:39.411075   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0814 17:41:38.013562   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:40.013740   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:41.994479   80228 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:41:42.007736   80228 kubeadm.go:597] duration metric: took 4m4.488869114s to restartPrimaryControlPlane
	W0814 17:41:42.007822   80228 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:41:42.007871   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:41:42.513259   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:45.013455   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:46.541593   80228 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.533697889s)
	I0814 17:41:46.541676   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
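At this point the run gives up on restarting the existing control plane and wipes it with a forced `kubeadm reset` before re-initializing, using the version-pinned binaries under /var/lib/minikube/binaries. A hedged Go sketch of that fallback, with the commands taken from the log and the helper name assumed:

```go
// Illustrative sketch: reset the control plane with the version-pinned kubeadm
// before a fresh init, as the log above does after the restart times out.
package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	binDir := "/var/lib/minikube/binaries/v1.20.0" // version-pinned kubeadm, per the log
	reset := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`, binDir)
	if err := run(reset); err != nil {
		fmt.Println("reset failed:", err)
		return
	}
	// After the reset, check whether the kubelet is still running as a service.
	_ = run("sudo systemctl is-active --quiet service kubelet")
}
```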
	I0814 17:41:46.556181   80228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:41:46.565943   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:41:46.575481   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:41:46.575501   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:41:46.575549   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:41:46.585143   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:41:46.585202   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:41:46.595157   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:41:46.604539   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:41:46.604600   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:41:46.613345   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.622186   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:41:46.622242   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:41:46.631221   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:41:46.640649   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:41:46.640706   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
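The grep/rm sequence above is a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. A compact Go sketch of the same idea, assuming it runs with root access on the node rather than over the SSH runner:

```go
// Illustrative sketch: drop kubeconfig files that do not reference the expected
// control-plane endpoint, mirroring the grep/rm pairs in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it so kubeadm writes a fresh one.
			fmt.Printf("%q may not reference %s - removing\n", f, endpoint)
			_ = os.Remove(f)
			continue
		}
		fmt.Printf("%q already points at %s - keeping\n", f, endpoint)
	}
}
```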
	I0814 17:41:46.650161   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:41:46.724104   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:41:46.724182   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:41:46.860463   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:41:46.860606   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:41:46.860725   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:41:47.036697   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:41:47.038444   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:41:47.038561   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:41:47.038670   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:41:47.038775   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:41:47.038860   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:41:47.038973   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:41:47.039067   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:41:47.039172   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:41:47.039256   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:41:47.039359   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:41:47.039456   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:41:47.039516   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:41:47.039587   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:41:47.278696   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:41:47.664300   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:41:47.988137   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:41:48.076560   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:41:48.093447   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:41:48.094656   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:41:48.094793   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:41:48.253225   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:41:48.255034   80228 out.go:204]   - Booting up control plane ...
	I0814 17:41:48.255160   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:41:48.259041   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:41:48.260074   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:41:48.260862   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:41:48.262910   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:41:47.513415   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:50.012937   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:52.013499   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:54.514150   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:57.013146   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:41:59.013393   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.014185   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:01.441261   79871 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.187019598s)
	I0814 17:42:01.441333   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:01.457213   79871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:01.466802   79871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:01.475719   79871 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:01.475736   79871 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:01.475784   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0814 17:42:01.484555   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:01.484618   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:01.493956   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0814 17:42:01.503873   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:01.503923   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:01.514710   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.524473   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:01.524531   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:01.534749   79871 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0814 17:42:01.544491   79871 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:01.544558   79871 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:01.555481   79871 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:01.599801   79871 kubeadm.go:310] W0814 17:42:01.575622    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.600615   79871 kubeadm.go:310] W0814 17:42:01.576625    2598 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:01.703064   79871 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:03.513007   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:05.514241   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:09.627141   79871 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:09.627216   79871 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:09.627344   79871 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:09.627480   79871 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:09.627638   79871 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:09.627717   79871 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:09.629272   79871 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:09.629370   79871 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:09.629472   79871 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:09.629592   79871 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:09.629712   79871 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:09.629780   79871 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:09.629826   79871 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:09.629898   79871 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:09.629963   79871 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:09.630076   79871 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:09.630198   79871 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:09.630253   79871 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:09.630314   79871 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:09.630357   79871 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:09.630412   79871 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:09.630457   79871 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:09.630509   79871 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:09.630560   79871 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:09.630629   79871 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:09.630688   79871 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:09.632664   79871 out.go:204]   - Booting up control plane ...
	I0814 17:42:09.632763   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:09.632878   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:09.632963   79871 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:09.633100   79871 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:09.633207   79871 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:09.633252   79871 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:09.633412   79871 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:09.633542   79871 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:09.633624   79871 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004125702s
	I0814 17:42:09.633727   79871 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:09.633814   79871 kubeadm.go:310] [api-check] The API server is healthy after 4.501648596s
	I0814 17:42:09.633967   79871 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:09.634119   79871 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:09.634169   79871 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:09.634328   79871 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-885666 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:09.634400   79871 kubeadm.go:310] [bootstrap-token] Using token: 17ct2j.hazurgskaspe26qx
	I0814 17:42:09.635732   79871 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:09.635859   79871 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:09.635990   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:09.636141   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:09.636250   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:09.636347   79871 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:09.636485   79871 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:09.636657   79871 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:09.636708   79871 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:09.636747   79871 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:09.636753   79871 kubeadm.go:310] 
	I0814 17:42:09.636813   79871 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:09.636835   79871 kubeadm.go:310] 
	I0814 17:42:09.636972   79871 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:09.636995   79871 kubeadm.go:310] 
	I0814 17:42:09.637029   79871 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:09.637120   79871 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:09.637185   79871 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:09.637195   79871 kubeadm.go:310] 
	I0814 17:42:09.637267   79871 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:09.637277   79871 kubeadm.go:310] 
	I0814 17:42:09.637315   79871 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:09.637321   79871 kubeadm.go:310] 
	I0814 17:42:09.637384   79871 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:09.637461   79871 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:09.637523   79871 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:09.637529   79871 kubeadm.go:310] 
	I0814 17:42:09.637623   79871 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:09.637691   79871 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:09.637703   79871 kubeadm.go:310] 
	I0814 17:42:09.637779   79871 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.637866   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:09.637890   79871 kubeadm.go:310] 	--control-plane 
	I0814 17:42:09.637899   79871 kubeadm.go:310] 
	I0814 17:42:09.638010   79871 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:09.638020   79871 kubeadm.go:310] 
	I0814 17:42:09.638098   79871 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 17ct2j.hazurgskaspe26qx \
	I0814 17:42:09.638211   79871 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:09.638234   79871 cni.go:84] Creating CNI manager for ""
	I0814 17:42:09.638246   79871 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:09.639748   79871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:09.641031   79871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:09.652173   79871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
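The bridge CNI step above creates /etc/cni/net.d and copies in a 1-k8s.conflist. As a rough illustration only, the Go sketch below writes a representative bridge + portmap chain; the JSON is an assumption and not necessarily the exact 496-byte file minikube generates:

```go
// Illustrative sketch: write a bridge CNI conflist like the one scp'd above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Representative bridge+portmap configuration (assumed content, for illustration).
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors `sudo mkdir -p /etc/cni/net.d`
		fmt.Println("mkdir failed:", err)
		return
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("wrote", path)
}
```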
	I0814 17:42:09.670482   79871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:09.670582   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-885666 minikube.k8s.io/updated_at=2024_08_14T17_42_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=default-k8s-diff-port-885666 minikube.k8s.io/primary=true
	I0814 17:42:09.703097   79871 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:09.881340   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:10.381470   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:07.516539   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.015456   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:10.882013   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.382239   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:11.881638   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.381703   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:12.881401   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.381524   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:13.881402   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.381696   79871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:14.498441   79871 kubeadm.go:1113] duration metric: took 4.827929439s to wait for elevateKubeSystemPrivileges
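The burst of identical `kubectl get sa default` calls above is a poll loop: the run waits until the "default" service account exists before granting kube-system privileges (the elevateKubeSystemPrivileges step). A hedged Go sketch of that loop, with paths taken from the log and the timeout and retry interval assumed:

```go
// Illustrative sketch: poll until the "default" service account exists,
// as the repeated `kubectl get sa default` calls above do.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed retry interval
	}
	fmt.Println("timed out waiting for default service account")
}
```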
	I0814 17:42:14.498474   79871 kubeadm.go:394] duration metric: took 4m59.336328921s to StartCluster
	I0814 17:42:14.498493   79871 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.498581   79871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:42:14.501029   79871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:42:14.501309   79871 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.184 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:42:14.501432   79871 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:42:14.501508   79871 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501541   79871 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501550   79871 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:42:14.501577   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501590   79871 config.go:182] Loaded profile config "default-k8s-diff-port-885666": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:42:14.501619   79871 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501667   79871 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.501677   79871 addons.go:243] addon metrics-server should already be in state true
	I0814 17:42:14.501716   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.501593   79871 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-885666"
	I0814 17:42:14.501840   79871 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-885666"
	I0814 17:42:14.502106   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502142   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502160   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502174   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502176   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.502199   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.502371   79871 out.go:177] * Verifying Kubernetes components...
	I0814 17:42:14.504085   79871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:42:14.519401   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0814 17:42:14.519631   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0814 17:42:14.520085   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520196   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.520701   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520722   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.520789   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0814 17:42:14.520978   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.520994   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.521255   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.521519   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521524   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.521703   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.522021   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.522051   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.522548   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.522572   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.522864   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.523507   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.523550   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.525737   79871 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-885666"
	W0814 17:42:14.525759   79871 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:42:14.525789   79871 host.go:66] Checking if "default-k8s-diff-port-885666" exists ...
	I0814 17:42:14.526144   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.526170   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.538930   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I0814 17:42:14.538995   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42207
	I0814 17:42:14.539567   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.539594   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.540125   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540138   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540266   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.540289   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.540624   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540770   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.540825   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.540970   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.542540   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.542848   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.544439   79871 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:42:14.544444   79871 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:42:14.544881   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
	I0814 17:42:14.545315   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.545575   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:42:14.545586   79871 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:42:14.545601   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545672   79871 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.545691   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:42:14.545708   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.545750   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.545759   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.546339   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.547167   79871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:42:14.547290   79871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:42:14.549794   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.549829   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550300   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550324   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550355   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.550423   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.550637   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550707   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.550965   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551025   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.551119   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551168   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.551302   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.551658   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
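The sshutil lines above build SSH clients against the VM (IP 192.168.50.184, user docker, key under the machines directory) so addon manifests can be copied and commands run. A hedged sketch of an equivalent client using golang.org/x/crypto/ssh; this is not the minikube sshutil implementation, and error handling is shortened:

```go
// Illustrative sketch: open an SSH session to the test VM using the key path
// and address logged above, then run a single command over it.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	}
	client, err := ssh.Dial("tcp", "192.168.50.184:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo crictl ps -a")
	fmt.Printf("err=%v\n%s", err, out)
}
```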
	I0814 17:42:14.567203   79871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37661
	I0814 17:42:14.567613   79871 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:42:14.568141   79871 main.go:141] libmachine: Using API Version  1
	I0814 17:42:14.568167   79871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:42:14.568484   79871 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:42:14.568678   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetState
	I0814 17:42:14.570337   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .DriverName
	I0814 17:42:14.570867   79871 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.570888   79871 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:42:14.570906   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHHostname
	I0814 17:42:14.574091   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574562   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:cc:3c", ip: ""} in network mk-default-k8s-diff-port-885666: {Iface:virbr3 ExpiryTime:2024-08-14 18:36:58 +0000 UTC Type:0 Mac:52:54:00:f8:cc:3c Iaid: IPaddr:192.168.50.184 Prefix:24 Hostname:default-k8s-diff-port-885666 Clientid:01:52:54:00:f8:cc:3c}
	I0814 17:42:14.574586   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | domain default-k8s-diff-port-885666 has defined IP address 192.168.50.184 and MAC address 52:54:00:f8:cc:3c in network mk-default-k8s-diff-port-885666
	I0814 17:42:14.574667   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHPort
	I0814 17:42:14.574857   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHKeyPath
	I0814 17:42:14.575039   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .GetSSHUsername
	I0814 17:42:14.575197   79871 sshutil.go:53] new ssh client: &{IP:192.168.50.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/default-k8s-diff-port-885666/id_rsa Username:docker}
	I0814 17:42:14.675594   79871 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:42:14.694520   79871 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702650   79871 node_ready.go:49] node "default-k8s-diff-port-885666" has status "Ready":"True"
	I0814 17:42:14.702672   79871 node_ready.go:38] duration metric: took 8.119351ms for node "default-k8s-diff-port-885666" to be "Ready" ...
	I0814 17:42:14.702684   79871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:14.707535   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
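The node_ready and pod_ready waits above reduce to polling the "Ready" condition on the node and on each system-critical pod. A hedged Go sketch of the node half using k8s.io/client-go (names and the 6m0s budget come from the log; the polling interval is an assumption, and this is not the minikube node_ready.go code):

```go
// Illustrative sketch: poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(c *kubernetes.Clientset, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19446-13977/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ready, err := nodeReady(client, "default-k8s-diff-port-885666"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // assumed interval
	}
	fmt.Println("timed out waiting for node Ready")
}
```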
	I0814 17:42:14.762686   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:42:14.805275   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:42:14.837118   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:42:14.837143   79871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:42:14.881848   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:42:14.881872   79871 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:42:14.902731   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.902762   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903058   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903076   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.903092   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.903111   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.903448   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.903484   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.903493   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.908980   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:14.908995   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:14.909239   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:14.909310   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:14.909336   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:14.920224   79871 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:42:14.920249   79871 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:42:14.955256   79871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
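Addon enablement above is two steps: the manifests are copied under /etc/kubernetes/addons, then applied in a single `kubectl apply` with the in-VM kubeconfig and version-pinned kubectl. A hedged Go sketch of the apply half, with paths taken from the metrics-server log line above and the helper name assumed:

```go
// Illustrative sketch: apply a set of addon manifests in one kubectl call,
// as the metrics-server apply in the log above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func applyAddons(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// sudo env KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...
	cmd := exec.Command("sudo", append([]string{"env"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ sudo env %s\n%s", strings.Join(args, " "), out)
	return err
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyAddons("/var/lib/minikube/binaries/v1.31.0/kubectl", "/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Println("apply failed:", err)
	}
}
```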
	I0814 17:42:15.297167   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297190   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297544   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.297602   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297631   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.297649   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.297659   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.297865   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.297885   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842348   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842376   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.842688   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.842707   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.842716   79871 main.go:141] libmachine: Making call to close driver server
	I0814 17:42:15.842738   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) DBG | Closing plugin on server side
	I0814 17:42:15.842805   79871 main.go:141] libmachine: (default-k8s-diff-port-885666) Calling .Close
	I0814 17:42:15.843057   79871 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:42:15.843070   79871 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:42:15.843081   79871 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-885666"
	I0814 17:42:15.844747   79871 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0814 17:42:12.513055   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:14.514298   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:15.845895   79871 addons.go:510] duration metric: took 1.344461878s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0814 17:42:16.714014   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:18.715243   79871 pod_ready.go:102] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:17.013231   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:19.013966   79367 pod_ready.go:102] pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace has status "Ready":"False"
	I0814 17:42:20.507978   79367 pod_ready.go:81] duration metric: took 4m0.001138158s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" ...
	E0814 17:42:20.508026   79367 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8c2cx" in "kube-system" namespace to be "Ready" (will not retry!)
	I0814 17:42:20.508048   79367 pod_ready.go:38] duration metric: took 4m6.305785273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:20.508081   79367 kubeadm.go:597] duration metric: took 4m13.455842043s to restartPrimaryControlPlane
	W0814 17:42:20.508145   79367 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0814 17:42:20.508186   79367 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:42:20.714660   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.714687   79871 pod_ready.go:81] duration metric: took 6.007129076s for pod "coredns-6f6b679f8f-k5qnj" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.714696   79871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719517   79871 pod_ready.go:92] pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.719542   79871 pod_ready.go:81] duration metric: took 4.838754ms for pod "coredns-6f6b679f8f-nm28w" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.719554   79871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724787   79871 pod_ready.go:92] pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:20.724816   79871 pod_ready.go:81] duration metric: took 5.250194ms for pod "etcd-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:20.724834   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731431   79871 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.731456   79871 pod_ready.go:81] duration metric: took 1.00661383s for pod "kube-apiserver-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.731468   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736442   79871 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.736467   79871 pod_ready.go:81] duration metric: took 4.989787ms for pod "kube-controller-manager-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.736480   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911495   79871 pod_ready.go:92] pod "kube-proxy-254cb" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:21.911520   79871 pod_ready.go:81] duration metric: took 175.03218ms for pod "kube-proxy-254cb" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:21.911529   79871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311700   79871 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace has status "Ready":"True"
	I0814 17:42:22.311730   79871 pod_ready.go:81] duration metric: took 400.194781ms for pod "kube-scheduler-default-k8s-diff-port-885666" in "kube-system" namespace to be "Ready" ...
	I0814 17:42:22.311739   79871 pod_ready.go:38] duration metric: took 7.609043377s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:42:22.311752   79871 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:42:22.311800   79871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:42:22.326995   79871 api_server.go:72] duration metric: took 7.825649112s to wait for apiserver process to appear ...
	I0814 17:42:22.327018   79871 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:42:22.327036   79871 api_server.go:253] Checking apiserver healthz at https://192.168.50.184:8444/healthz ...
	I0814 17:42:22.331069   79871 api_server.go:279] https://192.168.50.184:8444/healthz returned 200:
	ok
	I0814 17:42:22.332077   79871 api_server.go:141] control plane version: v1.31.0
	I0814 17:42:22.332096   79871 api_server.go:131] duration metric: took 5.0724ms to wait for apiserver health ...
	I0814 17:42:22.332103   79871 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:42:22.514565   79871 system_pods.go:59] 9 kube-system pods found
	I0814 17:42:22.514595   79871 system_pods.go:61] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.514601   79871 system_pods.go:61] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.514606   79871 system_pods.go:61] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.514610   79871 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.514614   79871 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.514617   79871 system_pods.go:61] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.514620   79871 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.514626   79871 system_pods.go:61] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.514631   79871 system_pods.go:61] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.514639   79871 system_pods.go:74] duration metric: took 182.531543ms to wait for pod list to return data ...
	I0814 17:42:22.514647   79871 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:42:22.713101   79871 default_sa.go:45] found service account: "default"
	I0814 17:42:22.713125   79871 default_sa.go:55] duration metric: took 198.471814ms for default service account to be created ...
	I0814 17:42:22.713136   79871 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:42:22.914576   79871 system_pods.go:86] 9 kube-system pods found
	I0814 17:42:22.914619   79871 system_pods.go:89] "coredns-6f6b679f8f-k5qnj" [cf05f7e2-29de-4437-b182-53cd65350631] Running
	I0814 17:42:22.914628   79871 system_pods.go:89] "coredns-6f6b679f8f-nm28w" [ba1fe4d0-1869-49ec-a281-18119a2ad26b] Running
	I0814 17:42:22.914635   79871 system_pods.go:89] "etcd-default-k8s-diff-port-885666" [62581194-9ace-41f9-ba0d-0df04b7dca41] Running
	I0814 17:42:22.914643   79871 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-885666" [ea586a7b-5ca4-48d7-8be3-c13770e0cb40] Running
	I0814 17:42:22.914650   79871 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-885666" [9610bcca-feef-45f2-8b36-a6e96d364e18] Running
	I0814 17:42:22.914657   79871 system_pods.go:89] "kube-proxy-254cb" [e42cc8ca-2adc-4597-b9ca-ec4d32fc7dbb] Running
	I0814 17:42:22.914665   79871 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-885666" [872997ac-b438-4be5-b187-af171228770c] Running
	I0814 17:42:22.914678   79871 system_pods.go:89] "metrics-server-6867b74b74-5q86r" [849df692-9f8e-455e-b209-25801151513b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:42:22.914689   79871 system_pods.go:89] "storage-provisioner" [5128eea6-234c-4aea-a9b7-835e840a31a3] Running
	I0814 17:42:22.914705   79871 system_pods.go:126] duration metric: took 201.563199ms to wait for k8s-apps to be running ...
	I0814 17:42:22.914716   79871 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:42:22.914768   79871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:22.928499   79871 system_svc.go:56] duration metric: took 13.774119ms WaitForService to wait for kubelet
	I0814 17:42:22.928525   79871 kubeadm.go:582] duration metric: took 8.427183796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:42:22.928543   79871 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:42:23.112363   79871 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:42:23.112398   79871 node_conditions.go:123] node cpu capacity is 2
	I0814 17:42:23.112410   79871 node_conditions.go:105] duration metric: took 183.861382ms to run NodePressure ...
	I0814 17:42:23.112423   79871 start.go:241] waiting for startup goroutines ...
	I0814 17:42:23.112432   79871 start.go:246] waiting for cluster config update ...
	I0814 17:42:23.112446   79871 start.go:255] writing updated cluster config ...
	I0814 17:42:23.112792   79871 ssh_runner.go:195] Run: rm -f paused
	I0814 17:42:23.162700   79871 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:42:23.164689   79871 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-885666" cluster and "default" namespace by default
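The readiness sequence logged above by pod_ready.go and api_server.go can be reproduced by hand against the same profile; a rough manual equivalent (assuming the kubeconfig context carries the profile name, as the "Done!" line states) is:

    kubectl --context default-k8s-diff-port-885666 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
    kubectl --context default-k8s-diff-port-885666 get --raw /healthz

The second command returns the same "ok" body that the probe against https://192.168.50.184:8444/healthz received.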
	I0814 17:42:28.263217   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:42:28.263629   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:28.263853   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:33.264169   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:33.264403   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:42:43.264648   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:42:43.264858   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
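The repeated kubelet-check failures above indicate that nothing is answering on the kubelet healthz port (10248) inside that VM; a hedged way to look further (not part of the test run, with <profile> standing in for whichever profile this kubeadm attempt belongs to) would be:

    minikube -p <profile> ssh -- sudo systemctl status kubelet
    minikube -p <profile> ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50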
	I0814 17:42:46.859569   79367 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.351355314s)
	I0814 17:42:46.859653   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:42:46.875530   79367 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0814 17:42:46.884772   79367 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:42:46.894185   79367 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:42:46.894208   79367 kubeadm.go:157] found existing configuration files:
	
	I0814 17:42:46.894258   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:42:46.903690   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:42:46.903748   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:42:46.913277   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:42:46.922120   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:42:46.922173   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:42:46.931143   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.939936   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:42:46.939997   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:42:46.949257   79367 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:42:46.958109   79367 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:42:46.958169   79367 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:42:46.967609   79367 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:42:47.010119   79367 kubeadm.go:310] W0814 17:42:46.983769    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.010889   79367 kubeadm.go:310] W0814 17:42:46.984558    3057 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0814 17:42:47.122746   79367 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:42:55.571963   79367 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0814 17:42:55.572017   79367 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:42:55.572127   79367 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:42:55.572236   79367 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:42:55.572314   79367 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0814 17:42:55.572385   79367 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:42:55.574178   79367 out.go:204]   - Generating certificates and keys ...
	I0814 17:42:55.574288   79367 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:42:55.574372   79367 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:42:55.574485   79367 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:42:55.574573   79367 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:42:55.574669   79367 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:42:55.574740   79367 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:42:55.574811   79367 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:42:55.574909   79367 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:42:55.575014   79367 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:42:55.575135   79367 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:42:55.575187   79367 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:42:55.575238   79367 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:42:55.575288   79367 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:42:55.575359   79367 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0814 17:42:55.575438   79367 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:42:55.575521   79367 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:42:55.575608   79367 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:42:55.575759   79367 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:42:55.575869   79367 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:42:55.577331   79367 out.go:204]   - Booting up control plane ...
	I0814 17:42:55.577429   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:42:55.577511   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:42:55.577587   79367 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:42:55.577773   79367 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:42:55.577908   79367 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:42:55.577968   79367 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:42:55.578152   79367 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0814 17:42:55.578301   79367 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0814 17:42:55.578368   79367 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.938552ms
	I0814 17:42:55.578428   79367 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0814 17:42:55.578480   79367 kubeadm.go:310] [api-check] The API server is healthy after 5.00239154s
	I0814 17:42:55.578605   79367 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0814 17:42:55.578777   79367 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0814 17:42:55.578863   79367 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0814 17:42:55.579025   79367 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-545149 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0814 17:42:55.579100   79367 kubeadm.go:310] [bootstrap-token] Using token: qzd0yh.k8a8j7f6vmqndeav
	I0814 17:42:55.580318   79367 out.go:204]   - Configuring RBAC rules ...
	I0814 17:42:55.580429   79367 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0814 17:42:55.580503   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0814 17:42:55.580683   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0814 17:42:55.580839   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0814 17:42:55.580935   79367 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0814 17:42:55.581047   79367 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0814 17:42:55.581197   79367 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0814 17:42:55.581235   79367 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0814 17:42:55.581279   79367 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0814 17:42:55.581285   79367 kubeadm.go:310] 
	I0814 17:42:55.581339   79367 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0814 17:42:55.581355   79367 kubeadm.go:310] 
	I0814 17:42:55.581470   79367 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0814 17:42:55.581480   79367 kubeadm.go:310] 
	I0814 17:42:55.581519   79367 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0814 17:42:55.581586   79367 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0814 17:42:55.581654   79367 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0814 17:42:55.581663   79367 kubeadm.go:310] 
	I0814 17:42:55.581749   79367 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0814 17:42:55.581757   79367 kubeadm.go:310] 
	I0814 17:42:55.581798   79367 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0814 17:42:55.581804   79367 kubeadm.go:310] 
	I0814 17:42:55.581857   79367 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0814 17:42:55.581944   79367 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0814 17:42:55.582007   79367 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0814 17:42:55.582014   79367 kubeadm.go:310] 
	I0814 17:42:55.582085   79367 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0814 17:42:55.582148   79367 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0814 17:42:55.582154   79367 kubeadm.go:310] 
	I0814 17:42:55.582221   79367 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582313   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 \
	I0814 17:42:55.582333   79367 kubeadm.go:310] 	--control-plane 
	I0814 17:42:55.582336   79367 kubeadm.go:310] 
	I0814 17:42:55.582426   79367 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0814 17:42:55.582434   79367 kubeadm.go:310] 
	I0814 17:42:55.582518   79367 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qzd0yh.k8a8j7f6vmqndeav \
	I0814 17:42:55.582678   79367 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:33648dfb1374a8154603fa790aed15b51b07f40a9f1ffc7dafbd579d5fe1c629 
	I0814 17:42:55.582691   79367 cni.go:84] Creating CNI manager for ""
	I0814 17:42:55.582697   79367 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 17:42:55.584337   79367 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0814 17:42:55.585493   79367 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0814 17:42:55.596201   79367 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0814 17:42:55.617012   79367 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0814 17:42:55.617115   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:55.617152   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-545149 minikube.k8s.io/updated_at=2024_08_14T17_42_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a63cabd83e66a3efc9084b0ad9541aeb5353ef35 minikube.k8s.io/name=no-preload-545149 minikube.k8s.io/primary=true
	I0814 17:42:55.794262   79367 ops.go:34] apiserver oom_adj: -16
	I0814 17:42:55.794421   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.294450   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:56.795280   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.294604   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:57.794700   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.294863   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:58.795404   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.295066   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:42:59.794529   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.294720   79367 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0814 17:43:00.409254   79367 kubeadm.go:1113] duration metric: took 4.79220609s to wait for elevateKubeSystemPrivileges
	I0814 17:43:00.409300   79367 kubeadm.go:394] duration metric: took 4m53.401266889s to StartCluster
	I0814 17:43:00.409323   79367 settings.go:142] acquiring lock: {Name:mk7710c7ae55b9e20553d6ca809f330a3f1954bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.409419   79367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:43:00.411076   79367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19446-13977/kubeconfig: {Name:mk705afa05675caf65e46b5396269ee5654c7715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0814 17:43:00.411313   79367 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0814 17:43:00.411438   79367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0814 17:43:00.411521   79367 addons.go:69] Setting storage-provisioner=true in profile "no-preload-545149"
	I0814 17:43:00.411529   79367 addons.go:69] Setting default-storageclass=true in profile "no-preload-545149"
	I0814 17:43:00.411552   79367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545149"
	I0814 17:43:00.411554   79367 addons.go:234] Setting addon storage-provisioner=true in "no-preload-545149"
	I0814 17:43:00.411564   79367 config.go:182] Loaded profile config "no-preload-545149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:43:00.411568   79367 addons.go:69] Setting metrics-server=true in profile "no-preload-545149"
	W0814 17:43:00.411566   79367 addons.go:243] addon storage-provisioner should already be in state true
	I0814 17:43:00.411601   79367 addons.go:234] Setting addon metrics-server=true in "no-preload-545149"
	W0814 17:43:00.411612   79367 addons.go:243] addon metrics-server should already be in state true
	I0814 17:43:00.411637   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411646   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.411922   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.411954   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412019   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412053   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412076   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.412103   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.412914   79367 out.go:177] * Verifying Kubernetes components...
	I0814 17:43:00.414471   79367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0814 17:43:00.427965   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0814 17:43:00.427966   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41043
	I0814 17:43:00.428460   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428608   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.428985   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429004   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429130   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.429147   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.429206   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0814 17:43:00.429346   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429443   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.429498   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.429543   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.430131   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.430152   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.430435   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.430446   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.430718   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.431238   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.431270   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.433273   79367 addons.go:234] Setting addon default-storageclass=true in "no-preload-545149"
	W0814 17:43:00.433292   79367 addons.go:243] addon default-storageclass should already be in state true
	I0814 17:43:00.433319   79367 host.go:66] Checking if "no-preload-545149" exists ...
	I0814 17:43:00.433551   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.433581   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.450138   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0814 17:43:00.450327   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
	I0814 17:43:00.450697   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.450818   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.451527   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451547   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451695   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.451706   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.451958   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.452224   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.452283   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.453110   79367 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 17:43:00.453141   79367 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 17:43:00.453937   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.455467   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36879
	I0814 17:43:00.455825   79367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0814 17:43:00.455934   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.456456   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.456479   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.456964   79367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.456981   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0814 17:43:00.456999   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.457000   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.457144   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.459264   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.460208   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460606   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.460636   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.460750   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.460858   79367 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0814 17:43:00.460989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.461150   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.461281   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.462118   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0814 17:43:00.462138   79367 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0814 17:43:00.462156   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.465200   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465643   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.465710   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.465829   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.466004   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.466165   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.466312   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.478054   79367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34147
	I0814 17:43:00.478616   79367 main.go:141] libmachine: () Calling .GetVersion
	I0814 17:43:00.479176   79367 main.go:141] libmachine: Using API Version  1
	I0814 17:43:00.479198   79367 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 17:43:00.479536   79367 main.go:141] libmachine: () Calling .GetMachineName
	I0814 17:43:00.479725   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetState
	I0814 17:43:00.481351   79367 main.go:141] libmachine: (no-preload-545149) Calling .DriverName
	I0814 17:43:00.481574   79367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.481588   79367 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0814 17:43:00.481606   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHHostname
	I0814 17:43:00.484454   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484738   79367 main.go:141] libmachine: (no-preload-545149) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:bd:d7", ip: ""} in network mk-no-preload-545149: {Iface:virbr1 ExpiryTime:2024-08-14 18:37:40 +0000 UTC Type:0 Mac:52:54:00:d0:bd:d7 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-545149 Clientid:01:52:54:00:d0:bd:d7}
	I0814 17:43:00.484771   79367 main.go:141] libmachine: (no-preload-545149) DBG | domain no-preload-545149 has defined IP address 192.168.39.162 and MAC address 52:54:00:d0:bd:d7 in network mk-no-preload-545149
	I0814 17:43:00.484989   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHPort
	I0814 17:43:00.485222   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHKeyPath
	I0814 17:43:00.485370   79367 main.go:141] libmachine: (no-preload-545149) Calling .GetSSHUsername
	I0814 17:43:00.485485   79367 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/no-preload-545149/id_rsa Username:docker}
	I0814 17:43:00.617562   79367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0814 17:43:00.665134   79367 node_ready.go:35] waiting up to 6m0s for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673659   79367 node_ready.go:49] node "no-preload-545149" has status "Ready":"True"
	I0814 17:43:00.673680   79367 node_ready.go:38] duration metric: took 8.508683ms for node "no-preload-545149" to be "Ready" ...
	I0814 17:43:00.673689   79367 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:00.680313   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:00.810401   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0814 17:43:00.827621   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0814 17:43:00.871727   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0814 17:43:00.871752   79367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0814 17:43:00.969061   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0814 17:43:00.969088   79367 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0814 17:43:01.103808   79367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.103839   79367 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0814 17:43:01.198160   79367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0814 17:43:01.880623   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.052957744s)
	I0814 17:43:01.880683   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880697   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.880749   79367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070305568s)
	I0814 17:43:01.880785   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.880804   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881075   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881093   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881103   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881115   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881248   79367 main.go:141] libmachine: (no-preload-545149) DBG | Closing plugin on server side
	I0814 17:43:01.881284   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881312   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881336   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.881375   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.881385   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881396   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.881682   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.881703   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:01.896050   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:01.896076   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:01.896351   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:01.896370   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131404   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131427   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.131744   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.131768   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.131780   79367 main.go:141] libmachine: Making call to close driver server
	I0814 17:43:02.131788   79367 main.go:141] libmachine: (no-preload-545149) Calling .Close
	I0814 17:43:02.132004   79367 main.go:141] libmachine: Successfully made call to close driver server
	I0814 17:43:02.132026   79367 main.go:141] libmachine: Making call to close connection to plugin binary
	I0814 17:43:02.132042   79367 addons.go:475] Verifying addon metrics-server=true in "no-preload-545149"
	I0814 17:43:02.133699   79367 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0814 17:43:03.265508   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:03.265720   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:02.135365   79367 addons.go:510] duration metric: took 1.72392081s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0814 17:43:02.687160   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:05.186062   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:07.187193   79367 pod_ready.go:102] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"False"
	I0814 17:43:09.188957   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.188990   79367 pod_ready.go:81] duration metric: took 8.508650006s for pod "coredns-6f6b679f8f-h4dmc" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.189003   79367 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194469   79367 pod_ready.go:92] pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.194492   79367 pod_ready.go:81] duration metric: took 5.48133ms for pod "coredns-6f6b679f8f-mpfqf" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.194501   79367 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199127   79367 pod_ready.go:92] pod "etcd-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.199150   79367 pod_ready.go:81] duration metric: took 4.643296ms for pod "etcd-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.199159   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203804   79367 pod_ready.go:92] pod "kube-apiserver-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.203825   79367 pod_ready.go:81] duration metric: took 4.659513ms for pod "kube-apiserver-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.203837   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208443   79367 pod_ready.go:92] pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.208465   79367 pod_ready.go:81] duration metric: took 4.620634ms for pod "kube-controller-manager-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.208479   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584443   79367 pod_ready.go:92] pod "kube-proxy-s6bps" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.584471   79367 pod_ready.go:81] duration metric: took 375.985094ms for pod "kube-proxy-s6bps" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.584481   79367 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985476   79367 pod_ready.go:92] pod "kube-scheduler-no-preload-545149" in "kube-system" namespace has status "Ready":"True"
	I0814 17:43:09.985504   79367 pod_ready.go:81] duration metric: took 401.014791ms for pod "kube-scheduler-no-preload-545149" in "kube-system" namespace to be "Ready" ...
	I0814 17:43:09.985515   79367 pod_ready.go:38] duration metric: took 9.311816641s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0814 17:43:09.985534   79367 api_server.go:52] waiting for apiserver process to appear ...
	I0814 17:43:09.985603   79367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 17:43:10.002239   79367 api_server.go:72] duration metric: took 9.590875358s to wait for apiserver process to appear ...
	I0814 17:43:10.002276   79367 api_server.go:88] waiting for apiserver healthz status ...
	I0814 17:43:10.002304   79367 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I0814 17:43:10.009410   79367 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I0814 17:43:10.010351   79367 api_server.go:141] control plane version: v1.31.0
	I0814 17:43:10.010381   79367 api_server.go:131] duration metric: took 8.098086ms to wait for apiserver health ...
	I0814 17:43:10.010389   79367 system_pods.go:43] waiting for kube-system pods to appear ...
	I0814 17:43:10.189597   79367 system_pods.go:59] 9 kube-system pods found
	I0814 17:43:10.189629   79367 system_pods.go:61] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.189634   79367 system_pods.go:61] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.189638   79367 system_pods.go:61] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.189642   79367 system_pods.go:61] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.189646   79367 system_pods.go:61] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.189649   79367 system_pods.go:61] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.189652   79367 system_pods.go:61] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.189658   79367 system_pods.go:61] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.189662   79367 system_pods.go:61] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.189670   79367 system_pods.go:74] duration metric: took 179.275641ms to wait for pod list to return data ...
	I0814 17:43:10.189678   79367 default_sa.go:34] waiting for default service account to be created ...
	I0814 17:43:10.385690   79367 default_sa.go:45] found service account: "default"
	I0814 17:43:10.385715   79367 default_sa.go:55] duration metric: took 196.030333ms for default service account to be created ...
	I0814 17:43:10.385723   79367 system_pods.go:116] waiting for k8s-apps to be running ...
	I0814 17:43:10.590237   79367 system_pods.go:86] 9 kube-system pods found
	I0814 17:43:10.590272   79367 system_pods.go:89] "coredns-6f6b679f8f-h4dmc" [33f2fdca-15ba-430f-989f-3c569f33a76a] Running
	I0814 17:43:10.590279   79367 system_pods.go:89] "coredns-6f6b679f8f-mpfqf" [7b0e3bf4-41d9-4151-8255-37881e596c20] Running
	I0814 17:43:10.590285   79367 system_pods.go:89] "etcd-no-preload-545149" [5fc2782c-a4c3-46d6-b2d2-3c9325f17ae4] Running
	I0814 17:43:10.590291   79367 system_pods.go:89] "kube-apiserver-no-preload-545149" [3cdde3b9-70b4-4e5e-bc48-ab207c903437] Running
	I0814 17:43:10.590299   79367 system_pods.go:89] "kube-controller-manager-no-preload-545149" [c8f222c9-95a1-4acf-9ca3-068832ed808f] Running
	I0814 17:43:10.590306   79367 system_pods.go:89] "kube-proxy-s6bps" [9165c654-568f-4206-878c-f0c88ccd38cd] Running
	I0814 17:43:10.590312   79367 system_pods.go:89] "kube-scheduler-no-preload-545149" [423d82b6-cb92-408b-a5d6-95305c91400c] Running
	I0814 17:43:10.590322   79367 system_pods.go:89] "metrics-server-6867b74b74-7qljd" [0f0e5d07-eb28-46b3-9270-554006151eda] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0814 17:43:10.590335   79367 system_pods.go:89] "storage-provisioner" [bc80ba99-eecf-4eb1-bd78-f88792cb3e5a] Running
	I0814 17:43:10.590351   79367 system_pods.go:126] duration metric: took 204.620982ms to wait for k8s-apps to be running ...
	I0814 17:43:10.590364   79367 system_svc.go:44] waiting for kubelet service to be running ....
	I0814 17:43:10.590418   79367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:10.605594   79367 system_svc.go:56] duration metric: took 15.223089ms WaitForService to wait for kubelet
	I0814 17:43:10.605624   79367 kubeadm.go:582] duration metric: took 10.194267533s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0814 17:43:10.605644   79367 node_conditions.go:102] verifying NodePressure condition ...
	I0814 17:43:10.786127   79367 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0814 17:43:10.786160   79367 node_conditions.go:123] node cpu capacity is 2
	I0814 17:43:10.786173   79367 node_conditions.go:105] duration metric: took 180.522994ms to run NodePressure ...
	I0814 17:43:10.786187   79367 start.go:241] waiting for startup goroutines ...
	I0814 17:43:10.786197   79367 start.go:246] waiting for cluster config update ...
	I0814 17:43:10.786210   79367 start.go:255] writing updated cluster config ...
	I0814 17:43:10.786498   79367 ssh_runner.go:195] Run: rm -f paused
	I0814 17:43:10.834139   79367 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0814 17:43:10.836315   79367 out.go:177] * Done! kubectl is now configured to use "no-preload-545149" cluster and "default" namespace by default
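The lines above show the readiness gating minikube applies before declaring the no-preload cluster usable: per-pod "Ready" checks, an apiserver /healthz probe against https://192.168.39.162:8443, a default service-account check, and a NodePressure verification. A minimal manual equivalent of that health probe, assuming the context name minikube reports above ("no-preload-545149"), would be:

	# Query the apiserver health endpoint through the kubeconfig minikube just wrote;
	# prints "ok" when healthy, matching the 200 response logged at 17:43:10.
	kubectl --context no-preload-545149 get --raw='/healthz'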
	I0814 17:43:43.267316   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:43:43.267596   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:43:43.267623   80228 kubeadm.go:310] 
	I0814 17:43:43.267680   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:43:43.267757   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:43:43.267778   80228 kubeadm.go:310] 
	I0814 17:43:43.267839   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:43:43.267894   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:43:43.268029   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:43:43.268044   80228 kubeadm.go:310] 
	I0814 17:43:43.268190   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:43:43.268247   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:43:43.268296   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:43:43.268305   80228 kubeadm.go:310] 
	I0814 17:43:43.268446   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:43:43.268561   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:43:43.268572   80228 kubeadm.go:310] 
	I0814 17:43:43.268741   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:43:43.268907   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:43:43.269021   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:43:43.269120   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:43:43.269131   80228 kubeadm.go:310] 
	I0814 17:43:43.269560   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:43:43.269642   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:43:43.269698   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0814 17:43:43.269809   80228 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0814 17:43:43.269853   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0814 17:43:43.733975   80228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 17:43:43.748632   80228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0814 17:43:43.758474   80228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0814 17:43:43.758493   80228 kubeadm.go:157] found existing configuration files:
	
	I0814 17:43:43.758543   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0814 17:43:43.767721   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0814 17:43:43.767777   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0814 17:43:43.777259   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0814 17:43:43.786562   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0814 17:43:43.786623   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0814 17:43:43.795253   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.803677   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0814 17:43:43.803733   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0814 17:43:43.812416   80228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0814 17:43:43.821020   80228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0814 17:43:43.821075   80228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0814 17:43:43.829709   80228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0814 17:43:44.024836   80228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0814 17:45:40.060126   80228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0814 17:45:40.060266   80228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0814 17:45:40.061931   80228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0814 17:45:40.062003   80228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0814 17:45:40.062110   80228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0814 17:45:40.062231   80228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0814 17:45:40.062360   80228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0814 17:45:40.062459   80228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0814 17:45:40.063940   80228 out.go:204]   - Generating certificates and keys ...
	I0814 17:45:40.064041   80228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0814 17:45:40.064124   80228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0814 17:45:40.064230   80228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0814 17:45:40.064305   80228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0814 17:45:40.064398   80228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0814 17:45:40.064471   80228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0814 17:45:40.064550   80228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0814 17:45:40.064632   80228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0814 17:45:40.064712   80228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0814 17:45:40.064798   80228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0814 17:45:40.064857   80228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0814 17:45:40.064913   80228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0814 17:45:40.064956   80228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0814 17:45:40.065040   80228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0814 17:45:40.065146   80228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0814 17:45:40.065222   80228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0814 17:45:40.065366   80228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0814 17:45:40.065490   80228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0814 17:45:40.065547   80228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0814 17:45:40.065648   80228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0814 17:45:40.068108   80228 out.go:204]   - Booting up control plane ...
	I0814 17:45:40.068211   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0814 17:45:40.068294   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0814 17:45:40.068395   80228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0814 17:45:40.068522   80228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0814 17:45:40.068675   80228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0814 17:45:40.068751   80228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0814 17:45:40.068843   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069048   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069141   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069393   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069510   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069756   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.069823   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.069982   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070051   80228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0814 17:45:40.070204   80228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0814 17:45:40.070211   80228 kubeadm.go:310] 
	I0814 17:45:40.070244   80228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0814 17:45:40.070291   80228 kubeadm.go:310] 		timed out waiting for the condition
	I0814 17:45:40.070299   80228 kubeadm.go:310] 
	I0814 17:45:40.070337   80228 kubeadm.go:310] 	This error is likely caused by:
	I0814 17:45:40.070379   80228 kubeadm.go:310] 		- The kubelet is not running
	I0814 17:45:40.070504   80228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0814 17:45:40.070521   80228 kubeadm.go:310] 
	I0814 17:45:40.070673   80228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0814 17:45:40.070723   80228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0814 17:45:40.070764   80228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0814 17:45:40.070776   80228 kubeadm.go:310] 
	I0814 17:45:40.070876   80228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0814 17:45:40.070945   80228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0814 17:45:40.070953   80228 kubeadm.go:310] 
	I0814 17:45:40.071045   80228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0814 17:45:40.071151   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0814 17:45:40.071246   80228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0814 17:45:40.071363   80228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0814 17:45:40.071453   80228 kubeadm.go:310] 
	I0814 17:45:40.071481   80228 kubeadm.go:394] duration metric: took 8m2.599023024s to StartCluster
	I0814 17:45:40.071554   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0814 17:45:40.071617   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0814 17:45:40.115691   80228 cri.go:89] found id: ""
	I0814 17:45:40.115719   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.115727   80228 logs.go:278] No container was found matching "kube-apiserver"
	I0814 17:45:40.115734   80228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0814 17:45:40.115798   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0814 17:45:40.155537   80228 cri.go:89] found id: ""
	I0814 17:45:40.155566   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.155574   80228 logs.go:278] No container was found matching "etcd"
	I0814 17:45:40.155580   80228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0814 17:45:40.155645   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0814 17:45:40.189570   80228 cri.go:89] found id: ""
	I0814 17:45:40.189604   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.189616   80228 logs.go:278] No container was found matching "coredns"
	I0814 17:45:40.189625   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0814 17:45:40.189708   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0814 17:45:40.222496   80228 cri.go:89] found id: ""
	I0814 17:45:40.222521   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.222528   80228 logs.go:278] No container was found matching "kube-scheduler"
	I0814 17:45:40.222533   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0814 17:45:40.222590   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0814 17:45:40.255095   80228 cri.go:89] found id: ""
	I0814 17:45:40.255129   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.255142   80228 logs.go:278] No container was found matching "kube-proxy"
	I0814 17:45:40.255151   80228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0814 17:45:40.255233   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0814 17:45:40.290297   80228 cri.go:89] found id: ""
	I0814 17:45:40.290326   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.290341   80228 logs.go:278] No container was found matching "kube-controller-manager"
	I0814 17:45:40.290348   80228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0814 17:45:40.290396   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0814 17:45:40.326660   80228 cri.go:89] found id: ""
	I0814 17:45:40.326685   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.326695   80228 logs.go:278] No container was found matching "kindnet"
	I0814 17:45:40.326701   80228 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0814 17:45:40.326764   80228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0814 17:45:40.359867   80228 cri.go:89] found id: ""
	I0814 17:45:40.359896   80228 logs.go:276] 0 containers: []
	W0814 17:45:40.359908   80228 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0814 17:45:40.359918   80228 logs.go:123] Gathering logs for container status ...
	I0814 17:45:40.359933   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0814 17:45:40.397513   80228 logs.go:123] Gathering logs for kubelet ...
	I0814 17:45:40.397543   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0814 17:45:40.451744   80228 logs.go:123] Gathering logs for dmesg ...
	I0814 17:45:40.451778   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0814 17:45:40.475817   80228 logs.go:123] Gathering logs for describe nodes ...
	I0814 17:45:40.475843   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0814 17:45:40.575913   80228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0814 17:45:40.575933   80228 logs.go:123] Gathering logs for CRI-O ...
	I0814 17:45:40.575946   80228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0814 17:45:40.683455   80228 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0814 17:45:40.683509   80228 out.go:239] * 
	W0814 17:45:40.683587   80228 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.683623   80228 out.go:239] * 
	W0814 17:45:40.684431   80228 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0814 17:45:40.688043   80228 out.go:177] 
	W0814 17:45:40.689238   80228 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0814 17:45:40.689291   80228 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0814 17:45:40.689318   80228 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0814 17:45:40.690913   80228 out.go:177] 
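The K8S_KUBELET_NOT_RUNNING exit above ends with a concrete suggestion (17:45:40): re-run start with the kubelet cgroup driver forced to systemd. A sketch of that retry, not the harness's actual invocation, assuming the profile name and versions shown elsewhere in this log, might look like:

	# Retry the failing profile with the suggested kubelet override; the version
	# and runtime flags mirror what the log reports (v1.20.0 on cri-o).
	minikube start -p old-k8s-version-505584 \
	  --kubernetes-version=v1.20.0 \
	  --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd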
	
	
	==> CRI-O <==
	Aug 14 17:57:02 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:02.976436012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658222976410085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d93f8d75-5b2e-41b7-adc3-cf5b89415e0f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:02 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:02.976946917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=713c8f7d-93dc-4aee-befd-8f0630870913 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:02 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:02.977005820Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=713c8f7d-93dc-4aee-befd-8f0630870913 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:02 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:02.977043100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=713c8f7d-93dc-4aee-befd-8f0630870913 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.006442641Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=826e77eb-1cd8-4061-baed-cb93308d1631 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.006517656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=826e77eb-1cd8-4061-baed-cb93308d1631 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.007326714Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71ac5c5f-dfc7-473a-a34a-53d33f892928 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.007744560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658223007725721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71ac5c5f-dfc7-473a-a34a-53d33f892928 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.008145303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5aa734d4-0ed8-42fc-8944-3ab6b2931618 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.008218119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5aa734d4-0ed8-42fc-8944-3ab6b2931618 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.008259759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5aa734d4-0ed8-42fc-8944-3ab6b2931618 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.038449739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ac1ee3e-c396-41fc-b446-62eec6aeccd5 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.038551930Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ac1ee3e-c396-41fc-b446-62eec6aeccd5 name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.039959802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e55f7e5b-f3bb-4fb4-ae6c-96cbbe113710 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.040360243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658223040338942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e55f7e5b-f3bb-4fb4-ae6c-96cbbe113710 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.040889464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=589d65f2-bf25-4fbf-9b5f-6e08b724e68c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.040959625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=589d65f2-bf25-4fbf-9b5f-6e08b724e68c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.041006518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=589d65f2-bf25-4fbf-9b5f-6e08b724e68c name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.071272391Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5fb02ef-6aaf-4b36-9d1f-a919548339fe name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.071357405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5fb02ef-6aaf-4b36-9d1f-a919548339fe name=/runtime.v1.RuntimeService/Version
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.072397168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b5ccb9a-7695-4e74-8a4e-6f6c2f298a78 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.072842420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723658223072816600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b5ccb9a-7695-4e74-8a4e-6f6c2f298a78 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.073472077Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98e27fcb-ba30-4767-b24c-626074ce7519 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.073529851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98e27fcb-ba30-4767-b24c-626074ce7519 name=/runtime.v1.RuntimeService/ListContainers
	Aug 14 17:57:03 old-k8s-version-505584 crio[648]: time="2024-08-14 17:57:03.073566126Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=98e27fcb-ba30-4767-b24c-626074ce7519 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
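This "connection refused" on localhost:8443 is consistent with the empty container listings above: CRI-O never created a kube-apiserver container, so kubectl has nothing to talk to. A quick manual confirmation from a shell inside the node (e.g. via `minikube ssh -p old-k8s-version-505584`), reusing the crictl command the kubeadm output itself suggests, could be:

	# An empty result means the runtime never started any control-plane containers.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause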
	
	
	==> dmesg <==
	[Aug14 17:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051751] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038545] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.928700] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.931842] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.538149] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.402686] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068532] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066584] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.214010] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.127681] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.254794] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.216784] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +0.064759] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.847232] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +11.985584] kauditd_printk_skb: 46 callbacks suppressed
	[Aug14 17:41] systemd-fstab-generator[5130]: Ignoring "noauto" option for root device
	[Aug14 17:43] systemd-fstab-generator[5418]: Ignoring "noauto" option for root device
	[  +0.067751] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 17:57:03 up 19 min,  0 users,  load average: 0.01, 0.05, 0.06
	Linux old-k8s-version-505584 5.10.207 #1 SMP Tue Aug 13 22:05:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]: goroutine 147 [select]:
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c01140, 0xc000b12600, 0xc000b10a80, 0xc000b10a20)
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]: created by net.(*netFD).connect
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]: goroutine 146 [select]:
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000337ea0, 0xc00083f901, 0xc000b12500, 0xc00097c550, 0xc000093480, 0xc000093440)
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00083f920, 0x0, 0x0)
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0004b5dc0)
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 14 17:56:58 old-k8s-version-505584 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 14 17:56:58 old-k8s-version-505584 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 14 17:56:58 old-k8s-version-505584 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 14 17:56:59 old-k8s-version-505584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 138.
	Aug 14 17:56:59 old-k8s-version-505584 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 14 17:56:59 old-k8s-version-505584 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 14 17:56:59 old-k8s-version-505584 kubelet[6924]: I0814 17:56:59.226528    6924 server.go:416] Version: v1.20.0
	Aug 14 17:56:59 old-k8s-version-505584 kubelet[6924]: I0814 17:56:59.226802    6924 server.go:837] Client rotation is on, will bootstrap in background
	Aug 14 17:56:59 old-k8s-version-505584 kubelet[6924]: I0814 17:56:59.229004    6924 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 14 17:56:59 old-k8s-version-505584 kubelet[6924]: W0814 17:56:59.230120    6924 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 14 17:56:59 old-k8s-version-505584 kubelet[6924]: I0814 17:56:59.230231    6924 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 2 (225.174109ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-505584" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (137.05s)
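Note: the dump above shows kubelet crash-looping under systemd (restart counter at 138) while the apiserver on localhost:8443 refuses connections, which is why the addon check was skipped. A minimal sketch for inspecting that state by hand, assuming the profile from this run (old-k8s-version-505584) still exists on the test host and following the same CLI conventions used elsewhere in this report:

  # overall profile state (APIServer is reported as "Stopped" above)
  out/minikube-linux-amd64 status -p old-k8s-version-505584
  # kubelet service state and a recent log tail inside the VM
  out/minikube-linux-amd64 -p old-k8s-version-505584 ssh "sudo systemctl status kubelet --no-pager"
  out/minikube-linux-amd64 -p old-k8s-version-505584 ssh "sudo journalctl -u kubelet --no-pager -n 100"
  # full cluster logs, written to a file for later inspection
  out/minikube-linux-amd64 logs -p old-k8s-version-505584 --file=old-k8s-version-505584.log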

                                                
                                    

Test pass (252/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 24.39
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 13.82
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 111.51
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 176.2
31 TestAddons/serial/GCPAuth/Namespaces 0.14
33 TestAddons/parallel/Registry 18.09
35 TestAddons/parallel/InspektorGadget 11.4
37 TestAddons/parallel/HelmTiller 11.55
39 TestAddons/parallel/CSI 60.67
40 TestAddons/parallel/Headlamp 17.6
41 TestAddons/parallel/CloudSpanner 5.57
42 TestAddons/parallel/LocalPath 62.05
43 TestAddons/parallel/NvidiaDevicePlugin 5.47
44 TestAddons/parallel/Yakd 10.72
46 TestCertOptions 89.25
47 TestCertExpiration 334.76
49 TestForceSystemdFlag 78.02
50 TestForceSystemdEnv 45.75
52 TestKVMDriverInstallOrUpdate 4.29
56 TestErrorSpam/setup 39.63
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.47
60 TestErrorSpam/unpause 1.69
61 TestErrorSpam/stop 4.87
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 53.94
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 41.44
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.2
73 TestFunctional/serial/CacheCmd/cache/add_local 2.07
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 31.36
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.3
84 TestFunctional/serial/LogsFileCmd 1.33
85 TestFunctional/serial/InvalidService 4.13
87 TestFunctional/parallel/ConfigCmd 0.31
88 TestFunctional/parallel/DashboardCmd 19.59
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.84
95 TestFunctional/parallel/ServiceCmdConnect 11.4
96 TestFunctional/parallel/AddonsCmd 0.11
97 TestFunctional/parallel/PersistentVolumeClaim 43.08
99 TestFunctional/parallel/SSHCmd 0.38
100 TestFunctional/parallel/CpCmd 1.18
101 TestFunctional/parallel/MySQL 25.48
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.17
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
111 TestFunctional/parallel/License 0.58
112 TestFunctional/parallel/ServiceCmd/DeployApp 21.23
113 TestFunctional/parallel/Version/short 0.04
114 TestFunctional/parallel/Version/components 0.43
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.54
120 TestFunctional/parallel/ImageCommands/Setup 1.77
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
125 TestFunctional/parallel/MountCmd/any-port 13.79
126 TestFunctional/parallel/ProfileCmd/profile_list 0.38
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.16
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.81
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
144 TestFunctional/parallel/MountCmd/specific-port 1.85
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.3
146 TestFunctional/parallel/ServiceCmd/List 0.91
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.15
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
149 TestFunctional/parallel/ServiceCmd/Format 0.34
150 TestFunctional/parallel/ServiceCmd/URL 0.29
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 237.78
158 TestMultiControlPlane/serial/DeployApp 6.32
159 TestMultiControlPlane/serial/PingHostFromPods 1.15
160 TestMultiControlPlane/serial/AddWorkerNode 58.53
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
163 TestMultiControlPlane/serial/CopyFile 12.42
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.42
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 337.96
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 78.38
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 48.68
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.63
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.57
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 6.62
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 86.72
211 TestMountStart/serial/StartWithMountFirst 30.37
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 23.95
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.68
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 21.99
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 140.39
223 TestMultiNode/serial/DeployApp2Nodes 5.19
224 TestMultiNode/serial/PingHostFrom2Pods 0.81
225 TestMultiNode/serial/AddNode 48.18
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.96
229 TestMultiNode/serial/StopNode 2.17
230 TestMultiNode/serial/StartAfterStop 38.24
232 TestMultiNode/serial/DeleteNode 2.19
234 TestMultiNode/serial/RestartMultiNode 177.55
235 TestMultiNode/serial/ValidateNameConflict 40.34
242 TestScheduledStopUnix 109.61
246 TestRunningBinaryUpgrade 199.51
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 90.58
260 TestNetworkPlugins/group/false 2.78
264 TestNoKubernetes/serial/StartWithStopK8s 39.77
265 TestStoppedBinaryUpgrade/Setup 2.29
266 TestStoppedBinaryUpgrade/Upgrade 104.22
267 TestNoKubernetes/serial/Start 45.13
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
269 TestNoKubernetes/serial/ProfileList 31.27
270 TestNoKubernetes/serial/Stop 1.29
271 TestNoKubernetes/serial/StartNoArgs 23.97
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
282 TestPause/serial/Start 59.16
283 TestNetworkPlugins/group/auto/Start 97.56
284 TestPause/serial/SecondStartNoReconfiguration 34.22
285 TestPause/serial/Pause 0.76
286 TestPause/serial/VerifyStatus 0.27
287 TestPause/serial/Unpause 0.66
288 TestPause/serial/PauseAgain 0.86
289 TestPause/serial/DeletePaused 0.82
290 TestPause/serial/VerifyDeletedResources 0.73
291 TestNetworkPlugins/group/kindnet/Start 66.17
292 TestNetworkPlugins/group/auto/KubeletFlags 0.2
293 TestNetworkPlugins/group/auto/NetCatPod 11.24
294 TestNetworkPlugins/group/auto/DNS 0.2
295 TestNetworkPlugins/group/auto/Localhost 0.16
296 TestNetworkPlugins/group/auto/HairPin 0.15
297 TestNetworkPlugins/group/custom-flannel/Start 70.73
298 TestNetworkPlugins/group/flannel/Start 100.35
299 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
300 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
301 TestNetworkPlugins/group/kindnet/NetCatPod 10.19
302 TestNetworkPlugins/group/kindnet/DNS 0.16
303 TestNetworkPlugins/group/kindnet/Localhost 0.14
304 TestNetworkPlugins/group/kindnet/HairPin 0.15
305 TestNetworkPlugins/group/enable-default-cni/Start 108.37
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
308 TestNetworkPlugins/group/custom-flannel/DNS 0.16
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
311 TestNetworkPlugins/group/bridge/Start 58.48
312 TestNetworkPlugins/group/calico/Start 110
313 TestNetworkPlugins/group/flannel/ControllerPod 6.01
314 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
315 TestNetworkPlugins/group/flannel/NetCatPod 12.23
316 TestNetworkPlugins/group/flannel/DNS 0.16
317 TestNetworkPlugins/group/flannel/Localhost 0.12
318 TestNetworkPlugins/group/flannel/HairPin 0.12
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.26
323 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
324 TestNetworkPlugins/group/bridge/NetCatPod 13.32
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
328 TestNetworkPlugins/group/bridge/DNS 0.18
329 TestNetworkPlugins/group/bridge/Localhost 0.13
330 TestNetworkPlugins/group/bridge/HairPin 0.13
332 TestStartStop/group/no-preload/serial/FirstStart 75.44
334 TestStartStop/group/embed-certs/serial/FirstStart 87.6
335 TestNetworkPlugins/group/calico/ControllerPod 6.01
336 TestNetworkPlugins/group/calico/KubeletFlags 0.2
337 TestNetworkPlugins/group/calico/NetCatPod 11.2
338 TestNetworkPlugins/group/calico/DNS 0.17
339 TestNetworkPlugins/group/calico/Localhost 0.17
340 TestNetworkPlugins/group/calico/HairPin 0.13
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.86
343 TestStartStop/group/no-preload/serial/DeployApp 10.26
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
346 TestStartStop/group/embed-certs/serial/DeployApp 10.56
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
349 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
355 TestStartStop/group/no-preload/serial/SecondStart 679.46
357 TestStartStop/group/embed-certs/serial/SecondStart 563.27
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 567.79
360 TestStartStop/group/old-k8s-version/serial/Stop 1.28
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/newest-cni/serial/FirstStart 43.93
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
375 TestStartStop/group/newest-cni/serial/Stop 10.37
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
377 TestStartStop/group/newest-cni/serial/SecondStart 36.24
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
381 TestStartStop/group/newest-cni/serial/Pause 4.06
x
+
TestDownloadOnly/v1.20.0/json-events (24.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-074409 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-074409 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.387933529s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.39s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-074409
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-074409: exit status 85 (56.230062ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-074409 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |          |
	|         | -p download-only-074409        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:09:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:09:26.391505   21189 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:09:26.391759   21189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:26.391769   21189 out.go:304] Setting ErrFile to fd 2...
	I0814 16:09:26.391777   21189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:26.391952   21189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	W0814 16:09:26.392114   21189 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19446-13977/.minikube/config/config.json: open /home/jenkins/minikube-integration/19446-13977/.minikube/config/config.json: no such file or directory
	I0814 16:09:26.392796   21189 out.go:298] Setting JSON to true
	I0814 16:09:26.393749   21189 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3110,"bootTime":1723648656,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:09:26.393812   21189 start.go:139] virtualization: kvm guest
	I0814 16:09:26.396287   21189 out.go:97] [download-only-074409] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0814 16:09:26.396447   21189 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball: no such file or directory
	I0814 16:09:26.396478   21189 notify.go:220] Checking for updates...
	I0814 16:09:26.397861   21189 out.go:169] MINIKUBE_LOCATION=19446
	I0814 16:09:26.399369   21189 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:09:26.400593   21189 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:09:26.401889   21189 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:09:26.403205   21189 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0814 16:09:26.405450   21189 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 16:09:26.405685   21189 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:09:26.510210   21189 out.go:97] Using the kvm2 driver based on user configuration
	I0814 16:09:26.510242   21189 start.go:297] selected driver: kvm2
	I0814 16:09:26.510250   21189 start.go:901] validating driver "kvm2" against <nil>
	I0814 16:09:26.510578   21189 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:09:26.510701   21189 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 16:09:26.525155   21189 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 16:09:26.525222   21189 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:09:26.525720   21189 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0814 16:09:26.525863   21189 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 16:09:26.525893   21189 cni.go:84] Creating CNI manager for ""
	I0814 16:09:26.525903   21189 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 16:09:26.525912   21189 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 16:09:26.525963   21189 start.go:340] cluster config:
	{Name:download-only-074409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-074409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:09:26.526130   21189 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:09:26.528225   21189 out.go:97] Downloading VM boot image ...
	I0814 16:09:26.528270   21189 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19446-13977/.minikube/cache/iso/amd64/minikube-v1.33.1-1723567878-19429-amd64.iso
	I0814 16:09:36.844341   21189 out.go:97] Starting "download-only-074409" primary control-plane node in "download-only-074409" cluster
	I0814 16:09:36.844360   21189 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 16:09:36.942024   21189 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:09:36.942050   21189 cache.go:56] Caching tarball of preloaded images
	I0814 16:09:36.942217   21189 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0814 16:09:36.944188   21189 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0814 16:09:36.944218   21189 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0814 16:09:37.044799   21189 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-074409 host does not exist
	  To start a cluster, run: "minikube start -p download-only-074409"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
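The download-only run above caches the VM boot image and the preload tarball without ever creating a node, so "minikube logs" exits non-zero here simply because the control-plane host for this profile does not exist (see the stdout above); the test treats that as expected. A minimal sketch of verifying the cached artifacts after such a run, assuming the default MINIKUBE_HOME (the /home/jenkins/minikube-integration/19446-13977/.minikube path above is specific to this CI workspace):

  # boot ISO downloaded at the start of the run
  ls "$HOME/.minikube/cache/iso/amd64/"
  # preloaded image tarballs, e.g. preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
  ls "$HOME/.minikube/cache/preloaded-tarball/"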

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-074409
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (13.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-495471 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-495471 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.819457246s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (13.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-495471
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-495471: exit status 85 (56.025226ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-074409 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | -p download-only-074409        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:09 UTC |
	| delete  | -p download-only-074409        | download-only-074409 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC | 14 Aug 24 16:09 UTC |
	| start   | -o=json --download-only        | download-only-495471 | jenkins | v1.33.1 | 14 Aug 24 16:09 UTC |                     |
	|         | -p download-only-495471        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/14 16:09:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0814 16:09:51.089024   21444 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:09:51.089242   21444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:51.089251   21444 out.go:304] Setting ErrFile to fd 2...
	I0814 16:09:51.089256   21444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:09:51.089423   21444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:09:51.089977   21444 out.go:298] Setting JSON to true
	I0814 16:09:51.090825   21444 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3135,"bootTime":1723648656,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:09:51.090882   21444 start.go:139] virtualization: kvm guest
	I0814 16:09:51.093238   21444 out.go:97] [download-only-495471] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:09:51.093440   21444 notify.go:220] Checking for updates...
	I0814 16:09:51.094884   21444 out.go:169] MINIKUBE_LOCATION=19446
	I0814 16:09:51.096260   21444 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:09:51.097597   21444 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:09:51.098925   21444 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:09:51.100298   21444 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0814 16:09:51.103122   21444 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0814 16:09:51.103395   21444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:09:51.134752   21444 out.go:97] Using the kvm2 driver based on user configuration
	I0814 16:09:51.134774   21444 start.go:297] selected driver: kvm2
	I0814 16:09:51.134779   21444 start.go:901] validating driver "kvm2" against <nil>
	I0814 16:09:51.135081   21444 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:09:51.135176   21444 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19446-13977/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0814 16:09:51.150264   21444 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0814 16:09:51.150311   21444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0814 16:09:51.150780   21444 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0814 16:09:51.150911   21444 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0814 16:09:51.150974   21444 cni.go:84] Creating CNI manager for ""
	I0814 16:09:51.150986   21444 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0814 16:09:51.150993   21444 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0814 16:09:51.151039   21444 start.go:340] cluster config:
	{Name:download-only-495471 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-495471 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:09:51.151121   21444 iso.go:125] acquiring lock: {Name:mk2e55322134d769b164591a68a4ad117a673f97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0814 16:09:51.152965   21444 out.go:97] Starting "download-only-495471" primary control-plane node in "download-only-495471" cluster
	I0814 16:09:51.152979   21444 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:09:51.664606   21444 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0814 16:09:51.664655   21444 cache.go:56] Caching tarball of preloaded images
	I0814 16:09:51.664811   21444 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0814 16:09:51.666669   21444 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0814 16:09:51.666687   21444 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0814 16:09:51.769921   21444 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19446-13977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-495471 host does not exist
	  To start a cluster, run: "minikube start -p download-only-495471"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-495471
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-629887 --alsologtostderr --binary-mirror http://127.0.0.1:46569 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-629887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-629887
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (111.51s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-972905 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-972905 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m50.691856219s)
helpers_test.go:175: Cleaning up "offline-crio-972905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-972905
--- PASS: TestOffline (111.51s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-521895
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-521895: exit status 85 (47.525497ms)

                                                
                                                
-- stdout --
	* Profile "addons-521895" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-521895"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-521895
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-521895: exit status 85 (46.389563ms)

                                                
                                                
-- stdout --
	* Profile "addons-521895" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-521895"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (176.2s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-521895 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-521895 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m56.204397356s)
--- PASS: TestAddons/Setup (176.20s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-521895 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-521895 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.09s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.234233ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-lbmb2" [4d1c8ab4-e3b2-4f6d-a2cb-c8356de3d1f8] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00350713s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rhc59" [3a27fa71-fb85-4942-be2d-fcc16d40a026] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004122592s
addons_test.go:342: (dbg) Run:  kubectl --context addons-521895 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-521895 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-521895 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.369406679s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 ip
2024/08/14 16:13:38 [DEBUG] GET http://192.168.39.170:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.09s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.4s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8hn7c" [d6435d61-905f-458c-9615-acfe29471efd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006433366s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-521895
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-521895: (6.389237638s)
--- PASS: TestAddons/parallel/InspektorGadget (11.40s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.55s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 4.13604ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-tjffm" [be865efe-6514-4d4f-b8e3-6c2ccec2e6f2] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003818052s
addons_test.go:475: (dbg) Run:  kubectl --context addons-521895 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-521895 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.017577082s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.55s)

                                                
                                    
x
+
TestAddons/parallel/CSI (60.67s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.842344ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-521895 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-521895 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [759de4d2-7302-417f-89cc-08232e8ff83d] Pending
helpers_test.go:344: "task-pv-pod" [759de4d2-7302-417f-89cc-08232e8ff83d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [759de4d2-7302-417f-89cc-08232e8ff83d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004339274s
addons_test.go:590: (dbg) Run:  kubectl --context addons-521895 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-521895 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-521895 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-521895 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-521895 delete pod task-pv-pod: (1.157232887s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-521895 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-521895 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-521895 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0f12985b-0191-4dbb-87b1-6e4ad0f4c12e] Pending
helpers_test.go:344: "task-pv-pod-restore" [0f12985b-0191-4dbb-87b1-6e4ad0f4c12e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0f12985b-0191-4dbb-87b1-6e4ad0f4c12e] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00540272s
addons_test.go:632: (dbg) Run:  kubectl --context addons-521895 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-521895 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-521895 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-521895 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.787342217s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.67s)
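Note on the output above: the long run of identical "kubectl ... get pvc hpvc -o jsonpath={.status.phase}" lines is a poll-until-Bound loop. A minimal standalone Go sketch of that pattern follows; it is illustrative only (the helper name waitForPVCPhase is hypothetical, not the helpers_test.go code) and assumes kubectl and the addons-521895 context are available on the host.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl until the claim reports the wanted
// phase or the timeout expires, mirroring the repeated jsonpath queries above.
func waitForPVCPhase(kubeContext, namespace, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", pvc, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", namespace, pvc, want, timeout)
}

func main() {
	// Values taken from the log above; the 6-minute timeout matches the test's stated wait.
	if err := waitForPVCPhase("addons-521895", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}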

                                                
                                    
TestAddons/parallel/Headlamp (17.6s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-521895 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-v8rx4" [b44e1a4a-d89a-4ee0-a82f-74f579ca8aec] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-v8rx4" [b44e1a4a-d89a-4ee0-a82f-74f579ca8aec] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-v8rx4" [b44e1a4a-d89a-4ee0-a82f-74f579ca8aec] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004499138s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-521895 addons disable headlamp --alsologtostderr -v=1: (5.690794636s)
--- PASS: TestAddons/parallel/Headlamp (17.60s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-f4fhz" [686f7ce3-4c26-4b3a-9ce9-7335191805c4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004183266s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-521895
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

                                                
                                    
TestAddons/parallel/LocalPath (62.05s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-521895 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-521895 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e2192833-8cbf-488c-9937-0a0c0f735e1c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e2192833-8cbf-488c-9937-0a0c0f735e1c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e2192833-8cbf-488c-9937-0a0c0f735e1c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004492792s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-521895 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 ssh "cat /opt/local-path-provisioner/pvc-230f268c-e9fb-47c8-a734-e535e5b8b6a9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-521895 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-521895 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-521895 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.280985632s)
--- PASS: TestAddons/parallel/LocalPath (62.05s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hb8bq" [36cab318-9976-4377-b906-b14c2be76513] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004526752s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-521895
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

                                                
                                    
TestAddons/parallel/Yakd (10.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zw2lk" [b4ecf022-dab5-4b4b-b536-919af678f8c3] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008419369s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-521895 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-521895 addons disable yakd --alsologtostderr -v=1: (5.711390833s)
--- PASS: TestAddons/parallel/Yakd (10.72s)

                                                
                                    
TestCertOptions (89.25s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-261471 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-261471 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m28.005151062s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-261471 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-261471 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-261471 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-261471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-261471
--- PASS: TestCertOptions (89.25s)

                                                
                                    
TestCertExpiration (334.76s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-966459 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-966459 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m1.866811807s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-966459 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-966459 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m31.746037932s)
helpers_test.go:175: Cleaning up "cert-expiration-966459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-966459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-966459: (1.149683883s)
--- PASS: TestCertExpiration (334.76s)

                                                
                                    
TestForceSystemdFlag (78.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-246775 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-246775 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.872799526s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-246775 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-246775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-246775
--- PASS: TestForceSystemdFlag (78.02s)

                                                
                                    
TestForceSystemdEnv (45.75s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-090754 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-090754 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.76667704s)
helpers_test.go:175: Cleaning up "force-systemd-env-090754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-090754
--- PASS: TestForceSystemdEnv (45.75s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.29s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.29s)

                                                
                                    
TestErrorSpam/setup (39.63s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-010425 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-010425 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-010425 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-010425 --driver=kvm2  --container-runtime=crio: (39.626418313s)
--- PASS: TestErrorSpam/setup (39.63s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (4.87s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 stop: (1.554200813s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 stop: (1.771220497s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-010425 --log_dir /tmp/nospam-010425 stop: (1.548053158s)
--- PASS: TestErrorSpam/stop (4.87s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19446-13977/.minikube/files/etc/test/nested/copy/21177/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (53.94s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907634 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-907634 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (53.937295488s)
--- PASS: TestFunctional/serial/StartWithProxy (53.94s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.44s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907634 --alsologtostderr -v=8
E0814 16:23:02.588927   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:02.595818   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:02.607190   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:02.628612   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:02.670015   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:02.751409   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:02.912956   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:03.234624   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:03.876676   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:05.158310   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:07.720211   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:12.842068   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:23:23.084321   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-907634 --alsologtostderr -v=8: (41.443539002s)
functional_test.go:663: soft start took 41.444221365s for "functional-907634" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.44s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-907634 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cache add registry.k8s.io/pause:3.1
E0814 16:23:43.566130   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 cache add registry.k8s.io/pause:3.1: (1.43359842s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 cache add registry.k8s.io/pause:3.3: (1.409550517s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 cache add registry.k8s.io/pause:latest: (1.359282552s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-907634 /tmp/TestFunctionalserialCacheCmdcacheadd_local4276160768/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cache add minikube-local-cache-test:functional-907634
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 cache add minikube-local-cache-test:functional-907634: (1.751848959s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cache delete minikube-local-cache-test:functional-907634
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-907634
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (194.206097ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 cache reload: (1.077223172s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
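The cache_reload sequence above (remove the image from the node, confirm it is gone, run "cache reload", confirm it is back) can be reproduced by hand. A rough Go sketch that simply shells out to the same commands shown in the log follows; it assumes the out/minikube-linux-amd64 binary and the functional-907634 profile exist, and the run helper is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns any error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	const profile = "functional-907634"
	const image = "registry.k8s.io/pause:latest"

	_ = run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo", "crictl", "rmi", image)
	// Expected to fail (exit status 1) while the image is absent from the node.
	_ = run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo", "crictl", "inspecti", image)
	_ = run("out/minikube-linux-amd64", "-p", profile, "cache", "reload")
	// Should succeed again once the cached image has been loaded back onto the node.
	_ = run("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo", "crictl", "inspecti", image)
}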

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 kubectl -- --context functional-907634 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-907634 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.36s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907634 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-907634 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.35912173s)
functional_test.go:761: restart took 31.3592123s for "functional-907634" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.36s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-907634 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
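The "phase"/"status" lines above come from reading the control-plane pods as JSON and checking each pod's phase and Ready condition. A minimal Go sketch of that check follows (the podList type is a hypothetical cut-down model, not the functional_test.go code; assumes kubectl and the functional-907634 context are available).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models only the fields the check needs from `kubectl get po -o json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-907634",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}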

                                                
                                    
TestFunctional/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 logs: (1.296057482s)
--- PASS: TestFunctional/serial/LogsCmd (1.30s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 logs --file /tmp/TestFunctionalserialLogsFileCmd178677110/001/logs.txt
E0814 16:24:24.528406   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 logs --file /tmp/TestFunctionalserialLogsFileCmd178677110/001/logs.txt: (1.327018534s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                    
TestFunctional/serial/InvalidService (4.13s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-907634 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-907634
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-907634: exit status 115 (259.292079ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.182:32204 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-907634 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.13s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 config get cpus: exit status 14 (58.272263ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 config get cpus: exit status 14 (43.721753ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-907634 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-907634 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 31100: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.59s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907634 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-907634 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.449597ms)

                                                
                                                
-- stdout --
	* [functional-907634] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:24:40.520614   30467 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:24:40.521244   30467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:24:40.521262   30467 out.go:304] Setting ErrFile to fd 2...
	I0814 16:24:40.521270   30467 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:24:40.521520   30467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:24:40.522185   30467 out.go:298] Setting JSON to false
	I0814 16:24:40.523205   30467 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4025,"bootTime":1723648656,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:24:40.523268   30467 start.go:139] virtualization: kvm guest
	I0814 16:24:40.525162   30467 out.go:177] * [functional-907634] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 16:24:40.526903   30467 notify.go:220] Checking for updates...
	I0814 16:24:40.526949   30467 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:24:40.528511   30467 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:24:40.530188   30467 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:24:40.531488   30467 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:24:40.532831   30467 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:24:40.534086   30467 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:24:40.535779   30467 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:24:40.536418   30467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:24:40.536473   30467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:24:40.551782   30467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37827
	I0814 16:24:40.552230   30467 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:24:40.552777   30467 main.go:141] libmachine: Using API Version  1
	I0814 16:24:40.552799   30467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:24:40.553148   30467 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:24:40.553310   30467 main.go:141] libmachine: (functional-907634) Calling .DriverName
	I0814 16:24:40.553537   30467 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:24:40.553839   30467 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:24:40.553870   30467 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:24:40.569402   30467 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I0814 16:24:40.569954   30467 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:24:40.570546   30467 main.go:141] libmachine: Using API Version  1
	I0814 16:24:40.570562   30467 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:24:40.570941   30467 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:24:40.571162   30467 main.go:141] libmachine: (functional-907634) Calling .DriverName
	I0814 16:24:40.604685   30467 out.go:177] * Using the kvm2 driver based on existing profile
	I0814 16:24:40.605925   30467 start.go:297] selected driver: kvm2
	I0814 16:24:40.605942   30467 start.go:901] validating driver "kvm2" against &{Name:functional-907634 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-907634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:24:40.606095   30467 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:24:40.608404   30467 out.go:177] 
	W0814 16:24:40.609569   30467 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0814 16:24:40.610674   30467 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907634 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-907634 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-907634 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.411547ms)

                                                
                                                
-- stdout --
	* [functional-907634] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 16:24:40.798926   30523 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:24:40.799045   30523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:24:40.799053   30523 out.go:304] Setting ErrFile to fd 2...
	I0814 16:24:40.799057   30523 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:24:40.799312   30523 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:24:40.799867   30523 out.go:298] Setting JSON to false
	I0814 16:24:40.800801   30523 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4025,"bootTime":1723648656,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 16:24:40.800854   30523 start.go:139] virtualization: kvm guest
	I0814 16:24:40.802936   30523 out.go:177] * [functional-907634] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0814 16:24:40.804453   30523 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 16:24:40.804441   30523 notify.go:220] Checking for updates...
	I0814 16:24:40.806096   30523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 16:24:40.808103   30523 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 16:24:40.809874   30523 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 16:24:40.811529   30523 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 16:24:40.812820   30523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 16:24:40.814707   30523 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:24:40.815252   30523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:24:40.815346   30523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:24:40.835475   30523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35745
	I0814 16:24:40.835991   30523 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:24:40.836607   30523 main.go:141] libmachine: Using API Version  1
	I0814 16:24:40.836631   30523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:24:40.837061   30523 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:24:40.837307   30523 main.go:141] libmachine: (functional-907634) Calling .DriverName
	I0814 16:24:40.837578   30523 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 16:24:40.837924   30523 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:24:40.837969   30523 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:24:40.856261   30523 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0814 16:24:40.856720   30523 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:24:40.857189   30523 main.go:141] libmachine: Using API Version  1
	I0814 16:24:40.857229   30523 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:24:40.857603   30523 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:24:40.857868   30523 main.go:141] libmachine: (functional-907634) Calling .DriverName
	I0814 16:24:40.893512   30523 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0814 16:24:40.895029   30523 start.go:297] selected driver: kvm2
	I0814 16:24:40.895040   30523 start.go:901] validating driver "kvm2" against &{Name:functional-907634 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19429/minikube-v1.33.1-1723567878-19429-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723567951-19429@sha256:cdfb2e55e95c0c0e857baca4796fc9879d11f99617f48df875a23169fed0e083 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-907634 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0814 16:24:40.895155   30523 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 16:24:40.897513   30523 out.go:177] 
	W0814 16:24:40.898892   30523 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0814 16:24:40.900364   30523 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
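
The French text above is the point of this test: the same dry-run that produced the English RSRC_INSUFFICIENT_REQ_MEMORY error earlier is rerun and the localized message catalog is used instead. A minimal Go sketch of reproducing that check outside the harness, assuming (an assumption, not shown in the log) that minikube resolves the message language from the standard LC_ALL/LANG environment variables:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Rerun the same dry-run start as the test, but force a French locale.
	// Assumption: minikube picks the message language up from LC_ALL/LANG.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-907634", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // exit status 23 is the expected outcome

	// The check only cares that the refusal is localized, not its exact text.
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") &&
		strings.Contains(string(out), "Fermeture") {
		fmt.Println("localized (French) memory error detected")
	}
}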

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)
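
The -f form above renders a Go template over the status struct; for machine consumption the -o json form is simpler. A small sketch that shells out to the same command and decodes it, where the field names (Host, Kubelet, APIServer, Kubeconfig) are taken from the template in the log and the exact JSON shape is an assumption of this sketch:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Fields mirror the template keys used above; the JSON shape is an assumption
// of this sketch, not a documented contract.
type minikubeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-907634", "status", "-o", "json").Output()
	if err != nil {
		// minikube encodes degraded states in its exit code but may still
		// print JSON, so keep whatever output was produced.
		fmt.Println("status exited non-zero:", err)
	}
	var st minikubeStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}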

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-907634 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-907634 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8fj8s" [a69cec0b-7959-476e-87af-fd4ca3f60f94] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8fj8s" [a69cec0b-7959-476e-87af-fd4ca3f60f94] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00943827s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 service hello-node-connect --url
functional_test.go:1649: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 service hello-node-connect --url: (1.169972389s)
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.182:32602
functional_test.go:1675: http://192.168.39.182:32602: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-8fj8s

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.182:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.182:32602
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.40s)
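
The sequence above is a complete NodePort round trip: create a deployment, expose it on port 8080, resolve the node URL with `minikube service --url`, and fetch the echoserver body. A compact Go sketch of the same flow; it omits the wait-for-Running step the test performs between expose and the URL lookup:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Deploy, expose as NodePort, resolve the node URL, then fetch the body.
	run("kubectl", "--context", "functional-907634", "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "--context", "functional-907634", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080")
	// (the test waits for the pod to be Running before this point)
	url := run("out/minikube-linux-amd64", "-p", "functional-907634",
		"service", "hello-node-connect", "--url")

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // echoserver reports hostname and request headers
}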

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (43.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7578aeb5-34f6-488c-bc2b-1c879d6819d8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004659391s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-907634 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-907634 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-907634 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-907634 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3e97e967-3405-4adc-a294-777ee2e44137] Pending
helpers_test.go:344: "sp-pod" [3e97e967-3405-4adc-a294-777ee2e44137] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3e97e967-3405-4adc-a294-777ee2e44137] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.021252226s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-907634 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-907634 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-907634 delete -f testdata/storage-provisioner/pod.yaml: (2.775559012s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-907634 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9745ef55-0b11-47dd-addf-eecdb076fba6] Pending
helpers_test.go:344: "sp-pod" [9745ef55-0b11-47dd-addf-eecdb076fba6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9745ef55-0b11-47dd-addf-eecdb076fba6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.00482805s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-907634 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.08s)
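
The pass condition here is persistence, not just scheduling: a file written through the first sp-pod must still be visible after the pod is deleted and recreated against the same claim. A short Go sketch of that check using the same manifests the test applies (the wait-for-Running steps are elided):

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the profile's context and panics on failure.
func kc(args ...string) string {
	full := append([]string{"--context", "functional-907634"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("kubectl %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	kc("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for sp-pod to reach Running, as the test does)
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (wait for the replacement sp-pod to reach Running)
	fmt.Print(kc("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect: foo
}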

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh -n functional-907634 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cp functional-907634:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1163441312/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh -n functional-907634 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh -n functional-907634 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.18s)
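
The cp test exercises both directions plus a destination directory that does not yet exist on the node: host file in, node file out, verified over SSH each time. A minimal Go sketch of the same loop; the local output path used below is a stand-in for the temp directory the test generates:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary against the test profile and panics on failure.
func mk(args ...string) string {
	all := append([]string{"-p", "functional-907634"}, args...)
	out, err := exec.Command("out/minikube-linux-amd64", all...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	// Host -> node, verify over SSH, then node -> host again.
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(mk("ssh", "-n", "functional-907634",
		"sudo cat /home/docker/cp-test.txt"))
	// "/tmp/cp-test-copy.txt" is a stand-in for the temp path the test uses.
	mk("cp", "functional-907634:/home/docker/cp-test.txt", "/tmp/cp-test-copy.txt")
}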

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-907634 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-hhrqd" [3fdd8e08-db10-411b-b048-cc3fdf6709a5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-hhrqd" [3fdd8e08-db10-411b-b048-cc3fdf6709a5] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.493735735s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-907634 exec mysql-6cdb49bbb-hhrqd -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-907634 exec mysql-6cdb49bbb-hhrqd -- mysql -ppassword -e "show databases;": exit status 1 (344.523465ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-907634 exec mysql-6cdb49bbb-hhrqd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.48s)
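
The first exec above fails with ERROR 2002 because the pod is Running before mysqld has created its socket, and the test simply retries until the query succeeds. A small Go sketch of that retry, reusing the pod name from this run (normally it would be looked up via the app=mysql label):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// The pod name is the one from this run; normally it would be discovered
	// with `kubectl get pods -l app=mysql`.
	args := []string{"--context", "functional-907634", "exec",
		"mysql-6cdb49bbb-hhrqd", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// ERROR 2002 just means mysqld has not opened its socket yet.
		if !strings.Contains(string(out), "ERROR 2002") {
			panic(fmt.Sprintf("unexpected failure: %v\n%s", err, out))
		}
		time.Sleep(3 * time.Second)
	}
	panic("mysql never became reachable")
}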

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/21177/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo cat /etc/test/nested/copy/21177/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/21177.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo cat /etc/ssl/certs/21177.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/21177.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo cat /usr/share/ca-certificates/21177.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/211772.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo cat /etc/ssl/certs/211772.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/211772.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo cat /usr/share/ca-certificates/211772.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.17s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-907634 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 ssh "sudo systemctl is-active docker": exit status 1 (183.622138ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 ssh "sudo systemctl is-active containerd": exit status 1 (196.069203ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
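
The exit status 1 results above are the expected outcome: with crio as the configured runtime, `systemctl is-active docker` and `... containerd` print "inactive" and exit non-zero (the underlying ssh reports status 3), which `minikube ssh` surfaces as exit 1. A short Go sketch of the same probe; crio is included only for contrast and is not part of the original check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// docker and containerd should report "inactive"; crio should be active.
	for _, unit := range []string{"docker", "containerd", "crio"} {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-907634", "ssh",
			"sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: %q (err: %v)\n", unit, state, err)
	}
}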

                                                
                                    
x
+
TestFunctional/parallel/License (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (21.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-907634 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-907634 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2dtz5" [38399f05-683a-454b-85ac-1a554b62e174] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2dtz5" [38399f05-683a-454b-85ac-1a554b62e174] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.043257398s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.23s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907634 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-907634
localhost/kicbase/echo-server:functional-907634
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907634 image ls --format short --alsologtostderr:
I0814 16:25:05.882510   31540 out.go:291] Setting OutFile to fd 1 ...
I0814 16:25:05.882638   31540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:05.882650   31540 out.go:304] Setting ErrFile to fd 2...
I0814 16:25:05.882657   31540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:05.882834   31540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
I0814 16:25:05.883363   31540 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:05.883490   31540 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:05.883829   31540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:05.883879   31540 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:05.898575   31540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34217
I0814 16:25:05.899053   31540 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:05.899711   31540 main.go:141] libmachine: Using API Version  1
I0814 16:25:05.899735   31540 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:05.900063   31540 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:05.900238   31540 main.go:141] libmachine: (functional-907634) Calling .GetState
I0814 16:25:05.902159   31540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:05.902192   31540 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:05.918345   31540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35227
I0814 16:25:05.918756   31540 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:05.919212   31540 main.go:141] libmachine: Using API Version  1
I0814 16:25:05.919233   31540 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:05.919580   31540 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:05.919757   31540 main.go:141] libmachine: (functional-907634) Calling .DriverName
I0814 16:25:05.919909   31540 ssh_runner.go:195] Run: systemctl --version
I0814 16:25:05.919936   31540 main.go:141] libmachine: (functional-907634) Calling .GetSSHHostname
I0814 16:25:05.922526   31540 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:05.922866   31540 main.go:141] libmachine: (functional-907634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:88:0c", ip: ""} in network mk-functional-907634: {Iface:virbr1 ExpiryTime:2024-08-14 17:22:20 +0000 UTC Type:0 Mac:52:54:00:8a:88:0c Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-907634 Clientid:01:52:54:00:8a:88:0c}
I0814 16:25:05.922891   31540 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined IP address 192.168.39.182 and MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:05.923018   31540 main.go:141] libmachine: (functional-907634) Calling .GetSSHPort
I0814 16:25:05.923172   31540 main.go:141] libmachine: (functional-907634) Calling .GetSSHKeyPath
I0814 16:25:05.923340   31540 main.go:141] libmachine: (functional-907634) Calling .GetSSHUsername
I0814 16:25:05.923502   31540 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/functional-907634/id_rsa Username:docker}
I0814 16:25:06.002960   31540 ssh_runner.go:195] Run: sudo crictl images --output json
I0814 16:25:06.041097   31540 main.go:141] libmachine: Making call to close driver server
I0814 16:25:06.041108   31540 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:06.041427   31540 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:06.041442   31540 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 16:25:06.041455   31540 main.go:141] libmachine: Making call to close driver server
I0814 16:25:06.041463   31540 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:06.041731   31540 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:06.041810   31540 main.go:141] libmachine: (functional-907634) DBG | Closing plugin on server side
I0814 16:25:06.041835   31540 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
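
As the stderr shows, `image ls` is backed by `sudo crictl images --output json` run over SSH on the node. A hedged Go sketch that runs the same command and prints the tags; the JSON field names follow the CRI image listing, but the exact shape is an assumption of this sketch:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal view of `crictl images --output json`; treat these field names and
// types as assumptions rather than a documented contract.
type crictlImages struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	// Same command the image listing runs on the node, here via `minikube ssh`.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-907634", "ssh",
		"sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}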

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907634 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-907634  | 019c4f02deab6 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| localhost/kicbase/echo-server           | functional-907634  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 900dca2a61f57 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907634 image ls --format table --alsologtostderr:
I0814 16:25:06.635211   31675 out.go:291] Setting OutFile to fd 1 ...
I0814 16:25:06.635322   31675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:06.635347   31675 out.go:304] Setting ErrFile to fd 2...
I0814 16:25:06.635357   31675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:06.635965   31675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
I0814 16:25:06.637215   31675 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:06.637349   31675 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:06.637763   31675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:06.637797   31675 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:06.652913   31675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
I0814 16:25:06.653361   31675 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:06.653904   31675 main.go:141] libmachine: Using API Version  1
I0814 16:25:06.653928   31675 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:06.654280   31675 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:06.654501   31675 main.go:141] libmachine: (functional-907634) Calling .GetState
I0814 16:25:06.656408   31675 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:06.656454   31675 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:06.672701   31675 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40313
I0814 16:25:06.673081   31675 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:06.673598   31675 main.go:141] libmachine: Using API Version  1
I0814 16:25:06.673632   31675 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:06.673953   31675 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:06.674132   31675 main.go:141] libmachine: (functional-907634) Calling .DriverName
I0814 16:25:06.674316   31675 ssh_runner.go:195] Run: systemctl --version
I0814 16:25:06.674339   31675 main.go:141] libmachine: (functional-907634) Calling .GetSSHHostname
I0814 16:25:06.677069   31675 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:06.677425   31675 main.go:141] libmachine: (functional-907634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:88:0c", ip: ""} in network mk-functional-907634: {Iface:virbr1 ExpiryTime:2024-08-14 17:22:20 +0000 UTC Type:0 Mac:52:54:00:8a:88:0c Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-907634 Clientid:01:52:54:00:8a:88:0c}
I0814 16:25:06.677451   31675 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined IP address 192.168.39.182 and MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:06.677595   31675 main.go:141] libmachine: (functional-907634) Calling .GetSSHPort
I0814 16:25:06.677731   31675 main.go:141] libmachine: (functional-907634) Calling .GetSSHKeyPath
I0814 16:25:06.677888   31675 main.go:141] libmachine: (functional-907634) Calling .GetSSHUsername
I0814 16:25:06.677999   31675 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/functional-907634/id_rsa Username:docker}
I0814 16:25:06.763675   31675 ssh_runner.go:195] Run: sudo crictl images --output json
I0814 16:25:06.817652   31675 main.go:141] libmachine: Making call to close driver server
I0814 16:25:06.817668   31675 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:06.817942   31675 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:06.817957   31675 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 16:25:06.817970   31675 main.go:141] libmachine: Making call to close driver server
I0814 16:25:06.817979   31675 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:06.818186   31675 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:06.818204   31675 main.go:141] libmachine: Making call to close connection to plugin binary
2024/08/14 16:25:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907634 image ls --format json --alsologtostderr:
[{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528
b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40","docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4
631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"871654
92"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause
:3.10"],"size":"742080"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e05
51e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-907634"],"size":"4943877"},{"id":"019c4f02deab6a4391583e3531d67c2591e60d129f4beec03c37fc5ddba16e27","repoDigests":["localhost/minikube-local-cache-test@sha256:6e6e6f7f0422c3824c87d3688cd297cc6dcfae87773056263fe78ff522347c9a"],"repoTags":["localhost/minikube-local-cache-test:functional-907634"],"size":"3330"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io
/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907634 image ls --format json --alsologtostderr:
I0814 16:25:06.425450   31627 out.go:291] Setting OutFile to fd 1 ...
I0814 16:25:06.425576   31627 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:06.425586   31627 out.go:304] Setting ErrFile to fd 2...
I0814 16:25:06.425595   31627 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:06.425774   31627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
I0814 16:25:06.426324   31627 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:06.426428   31627 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:06.426864   31627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:06.426916   31627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:06.442048   31627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
I0814 16:25:06.442514   31627 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:06.443129   31627 main.go:141] libmachine: Using API Version  1
I0814 16:25:06.443160   31627 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:06.443506   31627 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:06.443746   31627 main.go:141] libmachine: (functional-907634) Calling .GetState
I0814 16:25:06.445735   31627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:06.445782   31627 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:06.461010   31627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43891
I0814 16:25:06.461419   31627 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:06.461834   31627 main.go:141] libmachine: Using API Version  1
I0814 16:25:06.461856   31627 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:06.462194   31627 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:06.462363   31627 main.go:141] libmachine: (functional-907634) Calling .DriverName
I0814 16:25:06.462547   31627 ssh_runner.go:195] Run: systemctl --version
I0814 16:25:06.462572   31627 main.go:141] libmachine: (functional-907634) Calling .GetSSHHostname
I0814 16:25:06.465988   31627 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:06.466423   31627 main.go:141] libmachine: (functional-907634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:88:0c", ip: ""} in network mk-functional-907634: {Iface:virbr1 ExpiryTime:2024-08-14 17:22:20 +0000 UTC Type:0 Mac:52:54:00:8a:88:0c Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-907634 Clientid:01:52:54:00:8a:88:0c}
I0814 16:25:06.466458   31627 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined IP address 192.168.39.182 and MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:06.466588   31627 main.go:141] libmachine: (functional-907634) Calling .GetSSHPort
I0814 16:25:06.466721   31627 main.go:141] libmachine: (functional-907634) Calling .GetSSHKeyPath
I0814 16:25:06.466860   31627 main.go:141] libmachine: (functional-907634) Calling .GetSSHUsername
I0814 16:25:06.466940   31627 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/functional-907634/id_rsa Username:docker}
I0814 16:25:06.546115   31627 ssh_runner.go:195] Run: sudo crictl images --output json
I0814 16:25:06.591027   31627 main.go:141] libmachine: Making call to close driver server
I0814 16:25:06.591052   31627 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:06.591352   31627 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:06.591371   31627 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 16:25:06.591378   31627 main.go:141] libmachine: (functional-907634) DBG | Closing plugin on server side
I0814 16:25:06.591385   31627 main.go:141] libmachine: Making call to close driver server
I0814 16:25:06.591394   31627 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:06.591626   31627 main.go:141] libmachine: (functional-907634) DBG | Closing plugin on server side
I0814 16:25:06.591664   31627 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:06.591675   31627 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907634 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 019c4f02deab6a4391583e3531d67c2591e60d129f4beec03c37fc5ddba16e27
repoDigests:
- localhost/minikube-local-cache-test@sha256:6e6e6f7f0422c3824c87d3688cd297cc6dcfae87773056263fe78ff522347c9a
repoTags:
- localhost/minikube-local-cache-test:functional-907634
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
- docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-907634
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907634 image ls --format yaml --alsologtostderr:
I0814 16:25:06.084641   31564 out.go:291] Setting OutFile to fd 1 ...
I0814 16:25:06.084946   31564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:06.084957   31564 out.go:304] Setting ErrFile to fd 2...
I0814 16:25:06.084963   31564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:06.085155   31564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
I0814 16:25:06.085701   31564 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:06.085811   31564 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:06.086170   31564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:06.086245   31564 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:06.100667   31564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
I0814 16:25:06.101072   31564 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:06.101566   31564 main.go:141] libmachine: Using API Version  1
I0814 16:25:06.101587   31564 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:06.101888   31564 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:06.102103   31564 main.go:141] libmachine: (functional-907634) Calling .GetState
I0814 16:25:06.103940   31564 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:06.103977   31564 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:06.118027   31564 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44771
I0814 16:25:06.118419   31564 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:06.118782   31564 main.go:141] libmachine: Using API Version  1
I0814 16:25:06.118803   31564 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:06.119090   31564 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:06.119262   31564 main.go:141] libmachine: (functional-907634) Calling .DriverName
I0814 16:25:06.119451   31564 ssh_runner.go:195] Run: systemctl --version
I0814 16:25:06.119476   31564 main.go:141] libmachine: (functional-907634) Calling .GetSSHHostname
I0814 16:25:06.121955   31564 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:06.122309   31564 main.go:141] libmachine: (functional-907634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:88:0c", ip: ""} in network mk-functional-907634: {Iface:virbr1 ExpiryTime:2024-08-14 17:22:20 +0000 UTC Type:0 Mac:52:54:00:8a:88:0c Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-907634 Clientid:01:52:54:00:8a:88:0c}
I0814 16:25:06.122335   31564 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined IP address 192.168.39.182 and MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:06.122438   31564 main.go:141] libmachine: (functional-907634) Calling .GetSSHPort
I0814 16:25:06.122597   31564 main.go:141] libmachine: (functional-907634) Calling .GetSSHKeyPath
I0814 16:25:06.122744   31564 main.go:141] libmachine: (functional-907634) Calling .GetSSHUsername
I0814 16:25:06.122861   31564 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/functional-907634/id_rsa Username:docker}
I0814 16:25:06.197914   31564 ssh_runner.go:195] Run: sudo crictl images --output json
I0814 16:25:06.240079   31564 main.go:141] libmachine: Making call to close driver server
I0814 16:25:06.240091   31564 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:06.240410   31564 main.go:141] libmachine: (functional-907634) DBG | Closing plugin on server side
I0814 16:25:06.240430   31564 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:06.240450   31564 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 16:25:06.240462   31564 main.go:141] libmachine: Making call to close driver server
I0814 16:25:06.240472   31564 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:06.240678   31564 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:06.240700   31564 main.go:141] libmachine: (functional-907634) DBG | Closing plugin on server side
I0814 16:25:06.240705   31564 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 ssh pgrep buildkitd: exit status 1 (200.147108ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image build -t localhost/my-image:functional-907634 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 image build -t localhost/my-image:functional-907634 testdata/build --alsologtostderr: (3.125277815s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-907634 image build -t localhost/my-image:functional-907634 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b5ab60f2d06
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-907634
--> 5acc37088ad
Successfully tagged localhost/my-image:functional-907634
5acc37088ad6e248d1a4b381f553ac1bbe9a2fbfdc643ebe82d00fe1b6211e88
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-907634 image build -t localhost/my-image:functional-907634 testdata/build --alsologtostderr:
I0814 16:25:06.485626   31646 out.go:291] Setting OutFile to fd 1 ...
I0814 16:25:06.486000   31646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:06.486014   31646 out.go:304] Setting ErrFile to fd 2...
I0814 16:25:06.486025   31646 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0814 16:25:06.486289   31646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
I0814 16:25:06.486871   31646 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:06.487446   31646 config.go:182] Loaded profile config "functional-907634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0814 16:25:06.487819   31646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:06.487884   31646 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:06.502593   31646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34563
I0814 16:25:06.502976   31646 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:06.503465   31646 main.go:141] libmachine: Using API Version  1
I0814 16:25:06.503487   31646 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:06.503872   31646 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:06.504067   31646 main.go:141] libmachine: (functional-907634) Calling .GetState
I0814 16:25:06.505788   31646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0814 16:25:06.505822   31646 main.go:141] libmachine: Launching plugin server for driver kvm2
I0814 16:25:06.520154   31646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
I0814 16:25:06.520613   31646 main.go:141] libmachine: () Calling .GetVersion
I0814 16:25:06.521120   31646 main.go:141] libmachine: Using API Version  1
I0814 16:25:06.521148   31646 main.go:141] libmachine: () Calling .SetConfigRaw
I0814 16:25:06.521478   31646 main.go:141] libmachine: () Calling .GetMachineName
I0814 16:25:06.521643   31646 main.go:141] libmachine: (functional-907634) Calling .DriverName
I0814 16:25:06.521832   31646 ssh_runner.go:195] Run: systemctl --version
I0814 16:25:06.521851   31646 main.go:141] libmachine: (functional-907634) Calling .GetSSHHostname
I0814 16:25:06.524518   31646 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:06.524911   31646 main.go:141] libmachine: (functional-907634) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:88:0c", ip: ""} in network mk-functional-907634: {Iface:virbr1 ExpiryTime:2024-08-14 17:22:20 +0000 UTC Type:0 Mac:52:54:00:8a:88:0c Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-907634 Clientid:01:52:54:00:8a:88:0c}
I0814 16:25:06.524938   31646 main.go:141] libmachine: (functional-907634) DBG | domain functional-907634 has defined IP address 192.168.39.182 and MAC address 52:54:00:8a:88:0c in network mk-functional-907634
I0814 16:25:06.525101   31646 main.go:141] libmachine: (functional-907634) Calling .GetSSHPort
I0814 16:25:06.525278   31646 main.go:141] libmachine: (functional-907634) Calling .GetSSHKeyPath
I0814 16:25:06.525442   31646 main.go:141] libmachine: (functional-907634) Calling .GetSSHUsername
I0814 16:25:06.525602   31646 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/functional-907634/id_rsa Username:docker}
I0814 16:25:06.602622   31646 build_images.go:161] Building image from path: /tmp/build.4238656166.tar
I0814 16:25:06.602680   31646 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0814 16:25:06.613481   31646 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4238656166.tar
I0814 16:25:06.619184   31646 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4238656166.tar: stat -c "%s %y" /var/lib/minikube/build/build.4238656166.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4238656166.tar': No such file or directory
I0814 16:25:06.619214   31646 ssh_runner.go:362] scp /tmp/build.4238656166.tar --> /var/lib/minikube/build/build.4238656166.tar (3072 bytes)
I0814 16:25:06.653509   31646 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4238656166
I0814 16:25:06.663715   31646 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4238656166 -xf /var/lib/minikube/build/build.4238656166.tar
I0814 16:25:06.674547   31646 crio.go:315] Building image: /var/lib/minikube/build/build.4238656166
I0814 16:25:06.674612   31646 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-907634 /var/lib/minikube/build/build.4238656166 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0814 16:25:09.543793   31646 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-907634 /var/lib/minikube/build/build.4238656166 --cgroup-manager=cgroupfs: (2.86915879s)
I0814 16:25:09.543853   31646 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4238656166
I0814 16:25:09.554658   31646 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4238656166.tar
I0814 16:25:09.564777   31646 build_images.go:217] Built localhost/my-image:functional-907634 from /tmp/build.4238656166.tar
I0814 16:25:09.564815   31646 build_images.go:133] succeeded building to: functional-907634
I0814 16:25:09.564821   31646 build_images.go:134] failed building to: 
I0814 16:25:09.564846   31646 main.go:141] libmachine: Making call to close driver server
I0814 16:25:09.564862   31646 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:09.565108   31646 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:09.565127   31646 main.go:141] libmachine: Making call to close connection to plugin binary
I0814 16:25:09.565133   31646 main.go:141] libmachine: (functional-907634) DBG | Closing plugin on server side
I0814 16:25:09.565141   31646 main.go:141] libmachine: Making call to close driver server
I0814 16:25:09.565157   31646 main.go:141] libmachine: (functional-907634) Calling .Close
I0814 16:25:09.565354   31646 main.go:141] libmachine: Successfully made call to close driver server
I0814 16:25:09.565376   31646 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.54s)
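Note: from the three build steps in the stdout above, the testdata/build context presumably contains a Containerfile along the lines of FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /. A minimal sketch of reproducing the same flow by hand against this profile follows; the directory name and file contents are assumptions, not the actual test fixture.

# assumed reconstruction of the build context exercised by the test
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
printf 'hello\n' > content.txt            # placeholder; the real content.txt is not shown in the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# build inside the functional-907634 VM (CRI-O runtime, so podman performs the build)
out/minikube-linux-amd64 -p functional-907634 image build -t localhost/my-image:functional-907634 . --alsologtostderr
# confirm the new tag is visible to the container runtime
out/minikube-linux-amd64 -p functional-907634 image ls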

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.750506251s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-907634
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (13.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdany-port1891794052/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723652671296048386" to /tmp/TestFunctionalparallelMountCmdany-port1891794052/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723652671296048386" to /tmp/TestFunctionalparallelMountCmdany-port1891794052/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723652671296048386" to /tmp/TestFunctionalparallelMountCmdany-port1891794052/001/test-1723652671296048386
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.899239ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 14 16:24 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 14 16:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 14 16:24 test-1723652671296048386
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh cat /mount-9p/test-1723652671296048386
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-907634 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c73e81aa-5470-4fbc-ac79-f7fa31003f63] Pending
helpers_test.go:344: "busybox-mount" [c73e81aa-5470-4fbc-ac79-f7fa31003f63] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c73e81aa-5470-4fbc-ac79-f7fa31003f63] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c73e81aa-5470-4fbc-ac79-f7fa31003f63] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.005479648s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-907634 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdany-port1891794052/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.79s)
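Note: the same 9p mount round-trip can be driven manually. A minimal sketch, assuming the functional-907634 profile is running; the host directory name here is arbitrary.

# start the 9p mount in the background (host dir -> /mount-9p in the guest)
out/minikube-linux-amd64 mount -p functional-907634 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
# verify the mount is visible inside the VM, then inspect its contents
out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-907634 ssh -- ls -la /mount-9p
# tear it down when finished (or stop the backgrounded mount process)
out/minikube-linux-amd64 -p functional-907634 ssh "sudo umount -f /mount-9p"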

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "313.288299ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "61.582393ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image load --daemon kicbase/echo-server:functional-907634 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 image load --daemon kicbase/echo-server:functional-907634 --alsologtostderr: (2.959792016s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.16s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "286.812334ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.14272ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image load --daemon kicbase/echo-server:functional-907634 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-907634
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image load --daemon kicbase/echo-server:functional-907634 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.81s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image save kicbase/echo-server:functional-907634 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image rm kicbase/echo-server:functional-907634 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-907634
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 image save --daemon kicbase/echo-server:functional-907634 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-907634
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
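Note: taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon checks above exercise one save/remove/load round-trip. A condensed sketch of that cycle, using the same tag and tarball path as the tests.

# save the in-cluster image to a tarball on the host
out/minikube-linux-amd64 -p functional-907634 image save kicbase/echo-server:functional-907634 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
# remove it from the cluster runtime, then restore it from the tarball
out/minikube-linux-amd64 -p functional-907634 image rm kicbase/echo-server:functional-907634 --alsologtostderr
out/minikube-linux-amd64 -p functional-907634 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
# or push it straight back into the local docker daemon instead of a file
out/minikube-linux-amd64 -p functional-907634 image save --daemon kicbase/echo-server:functional-907634 --alsologtostderr
# verify the tag is present again
out/minikube-linux-amd64 -p functional-907634 image ls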

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdspecific-port97578320/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (178.088256ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdspecific-port97578320/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 ssh "sudo umount -f /mount-9p": exit status 1 (207.241389ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-907634 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdspecific-port97578320/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1238876736/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1238876736/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1238876736/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T" /mount1: exit status 1 (302.513848ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-907634 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1238876736/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1238876736/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-907634 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1238876736/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-907634 service list -o json: (1.147253964s)
functional_test.go:1494: Took "1.147373412s" to run "out/minikube-linux-amd64 -p functional-907634 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.15s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.182:30496
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-907634 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.182:30496
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
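Note: the ServiceCmd group resolves the same NodePort (30496 on 192.168.39.182 in this run) through different output modes. A short sketch, assuming the hello-node service deployed earlier in the ServiceCmd tests is still present.

out/minikube-linux-amd64 -p functional-907634 service list -o json                              # all services, machine-readable
out/minikube-linux-amd64 -p functional-907634 service --namespace=default --https --url hello-node
out/minikube-linux-amd64 -p functional-907634 service hello-node --url --format={{.IP}}         # node IP only
out/minikube-linux-amd64 -p functional-907634 service hello-node --url                          # -> http://192.168.39.182:30496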

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-907634
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-907634
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-907634
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (237.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-597780 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0814 16:25:46.450344   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:28:02.589066   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:28:30.292666   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-597780 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m57.121376059s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (237.78s)
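Note: the HA cluster above is created with a single start invocation. A minimal sketch of the same bring-up and health check.

# three control-plane nodes (--ha), KVM driver, CRI-O runtime, 2200 MB per node
out/minikube-linux-amd64 start -p ha-597780 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
# confirm every node and its components report as healthy
out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr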

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-597780 -- rollout status deployment/busybox: (4.250508692s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-27k42 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-rq7wd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-w9lh2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-27k42 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-rq7wd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-w9lh2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-27k42 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-rq7wd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-w9lh2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.32s)
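Note: DeployApp rolls out the busybox DNS test deployment and then resolves an external name, the cluster-short name, and the cluster FQDN from each replica. A condensed sketch of that verification; <busybox-pod> stands for each pod name returned by the jsonpath query, which differs per run.

out/minikube-linux-amd64 kubectl -p ha-597780 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p ha-597780 -- rollout status deployment/busybox
out/minikube-linux-amd64 kubectl -p ha-597780 -- get pods -o jsonpath='{.items[*].metadata.name}'
# repeat the three lookups for each <busybox-pod> listed above
out/minikube-linux-amd64 kubectl -p ha-597780 -- exec <busybox-pod> -- nslookup kubernetes.io
out/minikube-linux-amd64 kubectl -p ha-597780 -- exec <busybox-pod> -- nslookup kubernetes.default
out/minikube-linux-amd64 kubectl -p ha-597780 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local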

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-27k42 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-27k42 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-rq7wd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-rq7wd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-w9lh2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-597780 -- exec busybox-7dff88458-w9lh2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (58.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-597780 -v=7 --alsologtostderr
E0814 16:29:29.460523   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:29.466942   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:29.478363   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:29.499751   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:29.541224   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:29.622679   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:29.784227   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:30.105970   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:30.747468   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:32.029320   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:34.591435   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:39.713713   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:29:49.955514   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:30:10.437020   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-597780 -v=7 --alsologtostderr: (57.715841406s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.53s)
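Note: adding the worker is a single node add against the running profile; a sketch of the same step and its follow-up check.

out/minikube-linux-amd64 node add -p ha-597780 -v=7 --alsologtostderr
# status should now list the new node (ha-597780-m04 in this run) alongside the three control planes
out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr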

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-597780 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp testdata/cp-test.txt ha-597780:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780:/home/docker/cp-test.txt ha-597780-m02:/home/docker/cp-test_ha-597780_ha-597780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m02 "sudo cat /home/docker/cp-test_ha-597780_ha-597780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780:/home/docker/cp-test.txt ha-597780-m03:/home/docker/cp-test_ha-597780_ha-597780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m03 "sudo cat /home/docker/cp-test_ha-597780_ha-597780-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780:/home/docker/cp-test.txt ha-597780-m04:/home/docker/cp-test_ha-597780_ha-597780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m04 "sudo cat /home/docker/cp-test_ha-597780_ha-597780-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp testdata/cp-test.txt ha-597780-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m02:/home/docker/cp-test.txt ha-597780:/home/docker/cp-test_ha-597780-m02_ha-597780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780 "sudo cat /home/docker/cp-test_ha-597780-m02_ha-597780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m02:/home/docker/cp-test.txt ha-597780-m03:/home/docker/cp-test_ha-597780-m02_ha-597780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m03 "sudo cat /home/docker/cp-test_ha-597780-m02_ha-597780-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m02:/home/docker/cp-test.txt ha-597780-m04:/home/docker/cp-test_ha-597780-m02_ha-597780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m04 "sudo cat /home/docker/cp-test_ha-597780-m02_ha-597780-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp testdata/cp-test.txt ha-597780-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt ha-597780:/home/docker/cp-test_ha-597780-m03_ha-597780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780 "sudo cat /home/docker/cp-test_ha-597780-m03_ha-597780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt ha-597780-m02:/home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m02 "sudo cat /home/docker/cp-test_ha-597780-m03_ha-597780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m03:/home/docker/cp-test.txt ha-597780-m04:/home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m04 "sudo cat /home/docker/cp-test_ha-597780-m03_ha-597780-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp testdata/cp-test.txt ha-597780-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3967682573/001/cp-test_ha-597780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt ha-597780:/home/docker/cp-test_ha-597780-m04_ha-597780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780 "sudo cat /home/docker/cp-test_ha-597780-m04_ha-597780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt ha-597780-m02:/home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m02 "sudo cat /home/docker/cp-test_ha-597780-m04_ha-597780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 cp ha-597780-m04:/home/docker/cp-test.txt ha-597780-m03:/home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 ssh -n ha-597780-m03 "sudo cat /home/docker/cp-test_ha-597780-m04_ha-597780-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.42s)
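Note: the CopyFile run above exercises minikube's file-transfer path in every direction: it pushes testdata/cp-test.txt onto each node, reads it back over SSH, and copies it between node pairs. A minimal manual sketch of the same round-trip, with <profile>, <node>, and <other-node> as placeholder names:

    # push a file to a node, read it back, then copy it node-to-node (names are placeholders)
    out/minikube-linux-amd64 -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p <profile> cp <node>:/home/docker/cp-test.txt <other-node>:/home/docker/cp-test-copy.txt
    out/minikube-linux-amd64 -p <profile> ssh -n <other-node> "sudo cat /home/docker/cp-test-copy.txt"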

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.458125184s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.42s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-597780 node delete m03 -v=7 --alsologtostderr: (15.707297727s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.42s)
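Note: after deleting the node, ha_test.go:519 verifies the remaining nodes with a go-template that prints one Ready condition per node. The same check can be run by hand against the current kubeconfig context; a sketch (each output line is expected to read "True"):

    # print the Ready condition status for every node
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'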

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/RestartCluster (337.96s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-597780 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0814 16:44:29.462552   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:45:52.525307   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:48:02.588490   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-597780 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m37.192344686s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (337.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (78.38s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-597780 --control-plane -v=7 --alsologtostderr
E0814 16:49:29.460179   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-597780 --control-plane -v=7 --alsologtostderr: (1m17.577095855s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-597780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

TestJSONOutput/start/Command (48.68s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-948333 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-948333 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (48.675648957s)
--- PASS: TestJSONOutput/start/Command (48.68s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-948333 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-948333 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.62s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-948333 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-948333 --output=json --user=testUser: (6.616180983s)
--- PASS: TestJSONOutput/stop/Command (6.62s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-340128 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-340128 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.257984ms)
-- stdout --
	{"specversion":"1.0","id":"c0e9e3d2-d6c5-4f62-896d-8871b0be4110","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-340128] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b8ce44e-f100-4e78-a98e-a24fe11d23b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19446"}}
	{"specversion":"1.0","id":"bc8c9722-6695-4bc0-be36-e3517aa14ce5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bc82a4ee-e57d-4e13-89f4-7bf0bddb4f80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig"}}
	{"specversion":"1.0","id":"888bbd35-9be0-4393-ae69-d27f8625e996","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube"}}
	{"specversion":"1.0","id":"6fa03ee3-1011-436b-9140-e3f2695c904b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5fb29a09-836e-4427-8052-28005e017ef2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"872b9f47-7501-498e-93a9-3e1e73049d05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-340128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-340128
--- PASS: TestErrorJSONOutput (0.19s)
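Note: with --output=json, minikube emits one CloudEvents-style JSON object per line (type io.k8s.sigs.minikube.step, .info, or .error, as seen in the stdout above); the unsupported-driver case surfaces as a DRV_UNSUPPORTED_OS error event with exitcode 56. A sketch for pulling the error out of such a stream, assuming jq is available and with the profile name as a placeholder:

    # filter error events from minikube's line-delimited JSON output (jq assumed installed)
    out/minikube-linux-amd64 start -p <profile> --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'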

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (86.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-114044 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-114044 --driver=kvm2  --container-runtime=crio: (41.913687031s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-116974 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-116974 --driver=kvm2  --container-runtime=crio: (42.42184414s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-114044
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-116974
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-116974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-116974
helpers_test.go:175: Cleaning up "first-114044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-114044
--- PASS: TestMinikubeProfile (86.72s)

TestMountStart/serial/StartWithMountFirst (30.37s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-890373 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0814 16:53:02.588783   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-890373 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.369570549s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.37s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-890373 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-890373 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
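Note: the mount verification above only confirms that the host directory is visible inside the guest and that it is backed by a 9p filesystem. The equivalent manual check, with the profile name as a placeholder:

    # list the mounted host directory and confirm a 9p mount exists in the guest
    out/minikube-linux-amd64 -p <profile> ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p <profile> ssh -- mount | grep 9p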

TestMountStart/serial/StartWithMountSecond (23.95s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-903718 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-903718 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.945638395s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.95s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-903718 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-903718 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-890373 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-903718 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-903718 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-903718
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-903718: (1.273662333s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (21.99s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-903718
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-903718: (20.989923283s)
--- PASS: TestMountStart/serial/RestartStopped (21.99s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-903718 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-903718 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (140.39s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-986999 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0814 16:54:29.460541   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
E0814 16:56:05.656318   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-986999 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m19.993789329s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.39s)

TestMultiNode/serial/DeployApp2Nodes (5.19s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-986999 -- rollout status deployment/busybox: (3.778198639s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-2skwv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-72ch6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-2skwv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-72ch6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-2skwv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-72ch6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.19s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-2skwv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-2skwv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-72ch6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec busybox-7dff88458-72ch6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)
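Note: the ping test above resolves host.minikube.internal from inside each busybox pod, takes the resolved address from the fifth line of the nslookup output (third space-separated field), and pings it once. A sketch of the same two steps, with the pod name as a placeholder:

    # resolve the host's address from inside a pod and ping it once (pod name is a placeholder)
    HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec <busybox-pod> -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p multinode-986999 -- exec <busybox-pod> -- sh -c "ping -c 1 $HOST_IP"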

TestMultiNode/serial/AddNode (48.18s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-986999 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-986999 -v 3 --alsologtostderr: (47.633166011s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.18s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-986999 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.96s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp testdata/cp-test.txt multinode-986999:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3655799611/001/cp-test_multinode-986999.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999:/home/docker/cp-test.txt multinode-986999-m02:/home/docker/cp-test_multinode-986999_multinode-986999-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m02 "sudo cat /home/docker/cp-test_multinode-986999_multinode-986999-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999:/home/docker/cp-test.txt multinode-986999-m03:/home/docker/cp-test_multinode-986999_multinode-986999-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m03 "sudo cat /home/docker/cp-test_multinode-986999_multinode-986999-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp testdata/cp-test.txt multinode-986999-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3655799611/001/cp-test_multinode-986999-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999-m02:/home/docker/cp-test.txt multinode-986999:/home/docker/cp-test_multinode-986999-m02_multinode-986999.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999 "sudo cat /home/docker/cp-test_multinode-986999-m02_multinode-986999.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999-m02:/home/docker/cp-test.txt multinode-986999-m03:/home/docker/cp-test_multinode-986999-m02_multinode-986999-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m03 "sudo cat /home/docker/cp-test_multinode-986999-m02_multinode-986999-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp testdata/cp-test.txt multinode-986999-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3655799611/001/cp-test_multinode-986999-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt multinode-986999:/home/docker/cp-test_multinode-986999-m03_multinode-986999.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999 "sudo cat /home/docker/cp-test_multinode-986999-m03_multinode-986999.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 cp multinode-986999-m03:/home/docker/cp-test.txt multinode-986999-m02:/home/docker/cp-test_multinode-986999-m03_multinode-986999-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 ssh -n multinode-986999-m02 "sudo cat /home/docker/cp-test_multinode-986999-m03_multinode-986999-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.96s)

TestMultiNode/serial/StopNode (2.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-986999 node stop m03: (1.372214601s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-986999 status: exit status 7 (401.00977ms)
-- stdout --
	multinode-986999
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-986999-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-986999-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-986999 status --alsologtostderr: exit status 7 (395.328948ms)
-- stdout --
	multinode-986999
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-986999-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-986999-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0814 16:57:44.606841   49313 out.go:291] Setting OutFile to fd 1 ...
	I0814 16:57:44.606960   49313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:57:44.606970   49313 out.go:304] Setting ErrFile to fd 2...
	I0814 16:57:44.606976   49313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 16:57:44.607163   49313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 16:57:44.607405   49313 out.go:298] Setting JSON to false
	I0814 16:57:44.607439   49313 mustload.go:65] Loading cluster: multinode-986999
	I0814 16:57:44.607464   49313 notify.go:220] Checking for updates...
	I0814 16:57:44.607823   49313 config.go:182] Loaded profile config "multinode-986999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 16:57:44.607840   49313 status.go:255] checking status of multinode-986999 ...
	I0814 16:57:44.608244   49313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:57:44.608309   49313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:57:44.627291   49313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0814 16:57:44.627737   49313 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:57:44.628311   49313 main.go:141] libmachine: Using API Version  1
	I0814 16:57:44.628333   49313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:57:44.628712   49313 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:57:44.628926   49313 main.go:141] libmachine: (multinode-986999) Calling .GetState
	I0814 16:57:44.630522   49313 status.go:330] multinode-986999 host status = "Running" (err=<nil>)
	I0814 16:57:44.630545   49313 host.go:66] Checking if "multinode-986999" exists ...
	I0814 16:57:44.630844   49313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:57:44.630887   49313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:57:44.645826   49313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0814 16:57:44.646156   49313 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:57:44.646574   49313 main.go:141] libmachine: Using API Version  1
	I0814 16:57:44.646630   49313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:57:44.646900   49313 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:57:44.647070   49313 main.go:141] libmachine: (multinode-986999) Calling .GetIP
	I0814 16:57:44.649504   49313 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 16:57:44.649855   49313 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 16:57:44.649890   49313 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 16:57:44.650027   49313 host.go:66] Checking if "multinode-986999" exists ...
	I0814 16:57:44.650403   49313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:57:44.650444   49313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:57:44.664870   49313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0814 16:57:44.665203   49313 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:57:44.665607   49313 main.go:141] libmachine: Using API Version  1
	I0814 16:57:44.665631   49313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:57:44.665932   49313 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:57:44.666102   49313 main.go:141] libmachine: (multinode-986999) Calling .DriverName
	I0814 16:57:44.666254   49313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:57:44.666274   49313 main.go:141] libmachine: (multinode-986999) Calling .GetSSHHostname
	I0814 16:57:44.668587   49313 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 16:57:44.668948   49313 main.go:141] libmachine: (multinode-986999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:cc:65", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:54:34 +0000 UTC Type:0 Mac:52:54:00:23:cc:65 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-986999 Clientid:01:52:54:00:23:cc:65}
	I0814 16:57:44.668966   49313 main.go:141] libmachine: (multinode-986999) DBG | domain multinode-986999 has defined IP address 192.168.39.36 and MAC address 52:54:00:23:cc:65 in network mk-multinode-986999
	I0814 16:57:44.669081   49313 main.go:141] libmachine: (multinode-986999) Calling .GetSSHPort
	I0814 16:57:44.669226   49313 main.go:141] libmachine: (multinode-986999) Calling .GetSSHKeyPath
	I0814 16:57:44.669386   49313 main.go:141] libmachine: (multinode-986999) Calling .GetSSHUsername
	I0814 16:57:44.669512   49313 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999/id_rsa Username:docker}
	I0814 16:57:44.747432   49313 ssh_runner.go:195] Run: systemctl --version
	I0814 16:57:44.753295   49313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:57:44.767610   49313 kubeconfig.go:125] found "multinode-986999" server: "https://192.168.39.36:8443"
	I0814 16:57:44.767655   49313 api_server.go:166] Checking apiserver status ...
	I0814 16:57:44.767714   49313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0814 16:57:44.781117   49313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1103/cgroup
	W0814 16:57:44.790438   49313 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1103/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0814 16:57:44.790489   49313 ssh_runner.go:195] Run: ls
	I0814 16:57:44.794464   49313 api_server.go:253] Checking apiserver healthz at https://192.168.39.36:8443/healthz ...
	I0814 16:57:44.798360   49313 api_server.go:279] https://192.168.39.36:8443/healthz returned 200:
	ok
	I0814 16:57:44.798384   49313 status.go:422] multinode-986999 apiserver status = Running (err=<nil>)
	I0814 16:57:44.798398   49313 status.go:257] multinode-986999 status: &{Name:multinode-986999 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:57:44.798419   49313 status.go:255] checking status of multinode-986999-m02 ...
	I0814 16:57:44.798701   49313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:57:44.798731   49313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:57:44.813546   49313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0814 16:57:44.813943   49313 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:57:44.814361   49313 main.go:141] libmachine: Using API Version  1
	I0814 16:57:44.814379   49313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:57:44.814712   49313 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:57:44.814876   49313 main.go:141] libmachine: (multinode-986999-m02) Calling .GetState
	I0814 16:57:44.816299   49313 status.go:330] multinode-986999-m02 host status = "Running" (err=<nil>)
	I0814 16:57:44.816316   49313 host.go:66] Checking if "multinode-986999-m02" exists ...
	I0814 16:57:44.816578   49313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:57:44.816607   49313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:57:44.831244   49313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0814 16:57:44.831610   49313 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:57:44.832052   49313 main.go:141] libmachine: Using API Version  1
	I0814 16:57:44.832090   49313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:57:44.832379   49313 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:57:44.832560   49313 main.go:141] libmachine: (multinode-986999-m02) Calling .GetIP
	I0814 16:57:44.834939   49313 main.go:141] libmachine: (multinode-986999-m02) DBG | domain multinode-986999-m02 has defined MAC address 52:54:00:d7:08:b3 in network mk-multinode-986999
	I0814 16:57:44.835363   49313 main.go:141] libmachine: (multinode-986999-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:08:b3", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:55:35 +0000 UTC Type:0 Mac:52:54:00:d7:08:b3 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-986999-m02 Clientid:01:52:54:00:d7:08:b3}
	I0814 16:57:44.835383   49313 main.go:141] libmachine: (multinode-986999-m02) DBG | domain multinode-986999-m02 has defined IP address 192.168.39.2 and MAC address 52:54:00:d7:08:b3 in network mk-multinode-986999
	I0814 16:57:44.835573   49313 host.go:66] Checking if "multinode-986999-m02" exists ...
	I0814 16:57:44.835904   49313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:57:44.835946   49313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:57:44.850290   49313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
	I0814 16:57:44.850612   49313 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:57:44.850971   49313 main.go:141] libmachine: Using API Version  1
	I0814 16:57:44.850988   49313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:57:44.851269   49313 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:57:44.851459   49313 main.go:141] libmachine: (multinode-986999-m02) Calling .DriverName
	I0814 16:57:44.851630   49313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0814 16:57:44.851655   49313 main.go:141] libmachine: (multinode-986999-m02) Calling .GetSSHHostname
	I0814 16:57:44.853976   49313 main.go:141] libmachine: (multinode-986999-m02) DBG | domain multinode-986999-m02 has defined MAC address 52:54:00:d7:08:b3 in network mk-multinode-986999
	I0814 16:57:44.854356   49313 main.go:141] libmachine: (multinode-986999-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:08:b3", ip: ""} in network mk-multinode-986999: {Iface:virbr1 ExpiryTime:2024-08-14 17:55:35 +0000 UTC Type:0 Mac:52:54:00:d7:08:b3 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:multinode-986999-m02 Clientid:01:52:54:00:d7:08:b3}
	I0814 16:57:44.854386   49313 main.go:141] libmachine: (multinode-986999-m02) DBG | domain multinode-986999-m02 has defined IP address 192.168.39.2 and MAC address 52:54:00:d7:08:b3 in network mk-multinode-986999
	I0814 16:57:44.854533   49313 main.go:141] libmachine: (multinode-986999-m02) Calling .GetSSHPort
	I0814 16:57:44.854696   49313 main.go:141] libmachine: (multinode-986999-m02) Calling .GetSSHKeyPath
	I0814 16:57:44.854815   49313 main.go:141] libmachine: (multinode-986999-m02) Calling .GetSSHUsername
	I0814 16:57:44.854958   49313 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19446-13977/.minikube/machines/multinode-986999-m02/id_rsa Username:docker}
	I0814 16:57:44.929764   49313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0814 16:57:44.943237   49313 status.go:257] multinode-986999-m02 status: &{Name:multinode-986999-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0814 16:57:44.943270   49313 status.go:255] checking status of multinode-986999-m03 ...
	I0814 16:57:44.943614   49313 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0814 16:57:44.943662   49313 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0814 16:57:44.958690   49313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41429
	I0814 16:57:44.959181   49313 main.go:141] libmachine: () Calling .GetVersion
	I0814 16:57:44.959684   49313 main.go:141] libmachine: Using API Version  1
	I0814 16:57:44.959706   49313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0814 16:57:44.960009   49313 main.go:141] libmachine: () Calling .GetMachineName
	I0814 16:57:44.960175   49313 main.go:141] libmachine: (multinode-986999-m03) Calling .GetState
	I0814 16:57:44.961575   49313 status.go:330] multinode-986999-m03 host status = "Stopped" (err=<nil>)
	I0814 16:57:44.961589   49313 status.go:343] host is not running, skipping remaining checks
	I0814 16:57:44.961597   49313 status.go:257] multinode-986999-m03 status: &{Name:multinode-986999-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 node start m03 -v=7 --alsologtostderr
E0814 16:58:02.588645   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-986999 node start m03 -v=7 --alsologtostderr: (37.641364642s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.24s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-986999 node delete m03: (1.677449837s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.19s)
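The go-template handed to kubectl in the step above iterates every node's conditions and prints only the Ready status. Below is a minimal, self-contained Go sketch of what that template evaluates to; the node data is a hand-written map that mimics the kubectl JSON shape, not real cluster output.

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template string the test passes via -o go-template above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hand-written stand-in for `kubectl get nodes -o json`: one node, two conditions.
	nodes := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "MemoryPressure", "status": "False"},
						map[string]interface{}{"type": "Ready", "status": "True"},
					},
				},
			},
		},
	}

	t := template.Must(template.New("nodes").Parse(tmpl))
	_ = t.Execute(os.Stdout, nodes) // prints " True" plus a newline per Ready node
}

A fully healthy two-node cluster therefore prints two " True" lines, which is what the same assertion checks again after the restart below.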

                                                
                                    
TestMultiNode/serial/RestartMultiNode (177.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-986999 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0814 17:08:02.589309   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-986999 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m57.052449732s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-986999 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (177.55s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (40.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-986999
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-986999-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-986999-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.61503ms)

                                                
                                                
-- stdout --
	* [multinode-986999-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-986999-m02' is duplicated with machine name 'multinode-986999-m02' in profile 'multinode-986999'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-986999-m03 --driver=kvm2  --container-runtime=crio
E0814 17:09:29.459824   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-986999-m03 --driver=kvm2  --container-runtime=crio: (39.289805212s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-986999
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-986999: exit status 80 (196.641021ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-986999 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-986999-m03 already exists in multinode-986999-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-986999-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.34s)
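Both negative cases above are asserted on exit codes alone: 14 for the MK_USAGE duplicate-profile error and 80 for GUEST_NODE_ADD. A rough Go sketch of that pattern, reusing the binary path and profile name from this run (illustrative only, not the test's actual helper code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Starting a profile whose name collides with an existing machine name
	// is expected to fail fast with exit status 14 (MK_USAGE).
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "multinode-986999-m02", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("unexpected success:\n%s", out)
}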

                                                
                                    
TestScheduledStopUnix (109.61s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-887126 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-887126 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.066531449s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-887126 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-887126 -n scheduled-stop-887126
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-887126 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-887126 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-887126 -n scheduled-stop-887126
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-887126
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-887126 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-887126
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-887126: exit status 7 (64.028529ms)

                                                
                                                
-- stdout --
	scheduled-stop-887126
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-887126 -n scheduled-stop-887126
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-887126 -n scheduled-stop-887126: exit status 7 (64.892063ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-887126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-887126
--- PASS: TestScheduledStopUnix (109.61s)
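The run above leans on one convention: once the scheduled stop fires, `minikube status` exits with code 7 (host reported as Stopped), which the harness notes as "may be ok". A rough Go sketch of that wait loop, assuming the same binary and profile name as this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Schedule a stop 15 seconds out, as the test does.
	if err := exec.Command("out/minikube-linux-amd64", "stop",
		"-p", "scheduled-stop-887126", "--schedule", "15s").Run(); err != nil {
		fmt.Println("scheduling the stop failed:", err)
		return
	}

	// Poll `status` until it exits with code 7, i.e. the host is stopped.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("out/minikube-linux-amd64", "status",
			"-p", "scheduled-stop-887126").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			fmt.Println("profile stopped")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}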

                                                
                                    
TestRunningBinaryUpgrade (199.51s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3631427595 start -p running-upgrade-706037 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3631427595 start -p running-upgrade-706037 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m44.215612572s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-706037 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-706037 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m31.999454331s)
helpers_test.go:175: Cleaning up "running-upgrade-706037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-706037
--- PASS: TestRunningBinaryUpgrade (199.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-009758 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-009758 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.477598ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-009758] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (90.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-009758 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-009758 --driver=kvm2  --container-runtime=crio: (1m30.324238004s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-009758 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (90.58s)

                                                
                                    
TestNetworkPlugins/group/false (2.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-984053 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-984053 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (97.888514ms)

                                                
                                                
-- stdout --
	* [false-984053] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19446
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0814 17:17:19.008676   57159 out.go:291] Setting OutFile to fd 1 ...
	I0814 17:17:19.008784   57159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:17:19.008791   57159 out.go:304] Setting ErrFile to fd 2...
	I0814 17:17:19.008796   57159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0814 17:17:19.008965   57159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19446-13977/.minikube/bin
	I0814 17:17:19.009494   57159 out.go:298] Setting JSON to false
	I0814 17:17:19.010320   57159 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7183,"bootTime":1723648656,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0814 17:17:19.010376   57159 start.go:139] virtualization: kvm guest
	I0814 17:17:19.012415   57159 out.go:177] * [false-984053] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0814 17:17:19.013771   57159 out.go:177]   - MINIKUBE_LOCATION=19446
	I0814 17:17:19.013838   57159 notify.go:220] Checking for updates...
	I0814 17:17:19.016268   57159 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0814 17:17:19.017622   57159 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19446-13977/kubeconfig
	I0814 17:17:19.018837   57159 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19446-13977/.minikube
	I0814 17:17:19.019954   57159 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0814 17:17:19.020984   57159 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0814 17:17:19.022467   57159 config.go:182] Loaded profile config "NoKubernetes-009758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:17:19.022570   57159 config.go:182] Loaded profile config "force-systemd-env-090754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:17:19.022656   57159 config.go:182] Loaded profile config "offline-crio-972905": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0814 17:17:19.022724   57159 driver.go:392] Setting default libvirt URI to qemu:///system
	I0814 17:17:19.059406   57159 out.go:177] * Using the kvm2 driver based on user configuration
	I0814 17:17:19.060921   57159 start.go:297] selected driver: kvm2
	I0814 17:17:19.060942   57159 start.go:901] validating driver "kvm2" against <nil>
	I0814 17:17:19.060954   57159 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0814 17:17:19.062908   57159 out.go:177] 
	W0814 17:17:19.064197   57159 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0814 17:17:19.065399   57159 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-984053 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-984053" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-984053

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-984053"

                                                
                                                
----------------------- debugLogs end: false-984053 [took: 2.537725469s] --------------------------------
helpers_test.go:175: Cleaning up "false-984053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-984053
--- PASS: TestNetworkPlugins/group/false (2.78s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (39.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-009758 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-009758 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.716299617s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-009758 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-009758 status -o json: exit status 2 (244.510384ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-009758","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-009758
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (104.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.985093526 start -p stopped-upgrade-063007 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0814 17:19:12.530426   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.985093526 start -p stopped-upgrade-063007 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m2.904049869s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.985093526 -p stopped-upgrade-063007 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.985093526 -p stopped-upgrade-063007 stop: (1.406193452s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-063007 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-063007 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.904333956s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.22s)

                                                
                                    
TestNoKubernetes/serial/Start (45.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-009758 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0814 17:19:29.459937   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-009758 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.129372238s)
--- PASS: TestNoKubernetes/serial/Start (45.13s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-009758 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-009758 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.091073ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
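The `ssh: Process exited with status 3` above is the expected result: `systemctl is-active` exits 0 only when the unit is active, so the non-zero code (3 in this run) confirms kubelet is not running on a --no-kubernetes profile. A tiny Go sketch of the same check run locally (hypothetical, outside the test harness and without the ssh hop):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the kubelet unit is active; any non-zero code means it is not.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet is not active (exit status %d)\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}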

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.599548194s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.67325525s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.27s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-009758
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-009758: (1.293900287s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (23.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-009758 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-009758 --driver=kvm2  --container-runtime=crio: (23.967132283s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.97s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-063007
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-009758 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-009758 "sudo systemctl is-active --quiet service kubelet": exit status 1 (184.17ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestPause/serial/Start (59.16s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-255048 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-255048 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (59.163644833s)
--- PASS: TestPause/serial/Start (59.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (97.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0814 17:23:02.589185   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m37.557982933s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.56s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (34.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-255048 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-255048 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.198221877s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.22s)

                                                
                                    
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-255048 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-255048 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-255048 --output=json --layout=cluster: exit status 2 (273.082192ms)

                                                
                                                
-- stdout --
	{"Name":"pause-255048","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-255048","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
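The cluster-layout JSON above encodes component state numerically: 418 for Paused, 405 for Stopped, 200 for OK. A short Go sketch that decodes a trimmed copy of that output; the structs below cover only the fields this check looks at:

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal shape for `minikube status --output=json --layout=cluster`.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
		Components map[string]struct {
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the output captured above.
	raw := `{"Name":"pause-255048","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-255048","StatusName":"OK",
	    "Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},
	                  "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, "/ apiserver:", st.Nodes[0].Components["apiserver"].StatusName)
	// prints: Paused / apiserver: Paused
}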

                                                
                                    
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-255048 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/PauseAgain (0.86s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-255048 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

                                                
                                    
TestPause/serial/DeletePaused (0.82s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-255048 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.82s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.73s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.73s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m6.169207426s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.17s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-984053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-984053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6mlhf" [e48d7829-e07a-4318-bd66-1c649ae2a9b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6mlhf" [e48d7829-e07a-4318-bd66-1c649ae2a9b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005616441s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-984053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.732965641s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.73s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (100.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m40.34768873s)
--- PASS: TestNetworkPlugins/group/flannel/Start (100.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zzhpm" [5de31652-f2a9-4afe-8c97-d86f297c8578] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003865471s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
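The ControllerPod check is just a readiness wait on the CNI's own pods; by hand, with the app=kindnet label and kube-system namespace from the log (the flannel variant later uses app=flannel in kube-flannel), and an arbitrary timeout:

    kubectl --context kindnet-984053 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-984053 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m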

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-984053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-984053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8fcp8" [d22dfa98-5dc5-4b54-b801-5fdab96770b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8fcp8" [d22dfa98-5dc5-4b54-b801-5fdab96770b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004261173s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-984053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (108.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m48.370596831s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (108.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-984053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-984053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2lc4w" [e1ceeba8-33da-45b3-9f64-3a904a00eeae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2lc4w" [e1ceeba8-33da-45b3-9f64-3a904a00eeae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005110564s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-984053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (58.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (58.481167591s)
--- PASS: TestNetworkPlugins/group/bridge/Start (58.48s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (110s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-984053 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m50.003902802s)
--- PASS: TestNetworkPlugins/group/calico/Start (110.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xh9vc" [73fd6536-7342-4e44-90da-187d0863df1f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004461933s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-984053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-984053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-426gf" [66a464bd-68e9-4196-b541-22359c9dd2c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-426gf" [66a464bd-68e9-4196-b541-22359c9dd2c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004449516s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-984053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-984053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-984053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8cdgv" [f89c6831-75a3-46ba-84da-904029b52489] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8cdgv" [f89c6831-75a3-46ba-84da-904029b52489] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.004709331s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-984053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-984053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-22mrd" [5149230c-b345-492e-bbe2-17e89e373ce3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-22mrd" [5149230c-b345-492e-bbe2-17e89e373ce3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.007561503s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-984053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-984053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (75.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-545149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-545149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m15.442016551s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.44s)
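For reference, a minimal sketch of the no-preload start: --preload=false makes minikube skip its preloaded image tarball, so the container runtime pulls images individually on first start (the profile name is arbitrary; the other flags are the ones used above):

    minikube start -p no-preload-demo --memory=2200 --preload=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0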

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (87.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-309673 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0814 17:28:02.589443   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-309673 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m27.594913999s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.60s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4xxc8" [9c5eb4f8-85fa-4c6f-8068-c0f7c54e28ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004683053s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-984053 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-984053 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-frnwq" [cd70b4ff-0558-4e02-8d41-98868f85c825] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-frnwq" [cd70b4ff-0558-4e02-8d41-98868f85c825] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004844249s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.20s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-984053 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-984053 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-885666 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-885666 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m24.863958216s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-545149 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a2b9edf1-c481-45dc-b7fc-841f929af192] Pending
helpers_test.go:344: "busybox" [a2b9edf1-c481-45dc-b7fc-841f929af192] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a2b9edf1-c481-45dc-b7fc-841f929af192] Running
E0814 17:29:16.591510   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:16.597877   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:16.609316   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:16.630737   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:16.672175   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:16.753650   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:16.915003   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:17.237289   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:17.879389   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003871038s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-545149 exec busybox -- /bin/sh -c "ulimit -n"
E0814 17:29:19.161206   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)
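The DeployApp step is a plain busybox pod plus one exec; roughly, using the manifest path and pod name from the log (the explicit wait is an added convenience, the test polls instead):

    kubectl --context no-preload-545149 create -f testdata/busybox.yaml
    kubectl --context no-preload-545149 wait --for=condition=ready pod busybox --timeout=8m
    # the assertion only requires that the exec succeeds and prints the fd limit
    kubectl --context no-preload-545149 exec busybox -- /bin/sh -c "ulimit -n"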

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-545149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-545149 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)
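This step enables metrics-server with both an image and a registry override (fake.domain is presumably there only to prove the override is honored, not to serve images); a rough manual equivalent plus the same sanity check the test runs:

    minikube -p no-preload-545149 addons enable metrics-server \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # describe the deployment and eyeball the image/registry it was rewritten to
    kubectl --context no-preload-545149 -n kube-system describe deploy/metrics-server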

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-309673 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [876cfcd4-be4c-422c-ad8f-ae89b22dd9b2] Pending
helpers_test.go:344: "busybox" [876cfcd4-be4c-422c-ad8f-ae89b22dd9b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0814 17:29:25.660443   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [876cfcd4-be4c-422c-ad8f-ae89b22dd9b2] Running
E0814 17:29:26.844774   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/auto-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:29:29.460203   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/functional-907634/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004003635s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-309673 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-309673 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-309673 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-885666 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c480594b-115b-458a-8fc4-cf59ff157be0] Pending
helpers_test.go:344: "busybox" [c480594b-115b-458a-8fc4-cf59ff157be0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c480594b-115b-458a-8fc4-cf59ff157be0] Running
E0814 17:30:18.924037   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/kindnet-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004871499s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-885666 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-885666 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-885666 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (679.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-545149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-545149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (11m19.211126505s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-545149 -n no-preload-545149
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (679.46s)
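After the restart the test only asserts on the host state; minikube status accepts Go-template output, which makes the same check easy to script (the field names are the ones this log already uses):

    minikube status -p no-preload-545149 --format='{{.Host}}'
    minikube status -p no-preload-545149 --format='{{.APIServer}}'
    minikube status -p no-preload-545149 --format='{{.Kubelet}}'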

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (563.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-309673 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0814 17:32:06.057415   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:17.222075   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:19.605776   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:19.612139   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:19.623489   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:19.644854   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:19.686329   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:19.767818   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:19.929393   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:20.251095   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:20.892755   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:21.996609   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:22.002996   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:22.014376   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:22.035792   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:22.077179   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:22.158624   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:22.175020   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:22.320567   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:22.642045   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:23.284212   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:32:24.565731   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-309673 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m23.026793832s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-309673 -n embed-certs-309673
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (563.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (567.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-885666 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0814 17:33:00.583285   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:02.588729   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:02.972532   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:13.865498   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:13.871856   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:13.883179   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:13.904601   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:13.946001   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:14.027469   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:14.189110   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:14.510865   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:15.153165   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:16.435461   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:18.997735   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:24.119103   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:34.360493   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:39.143918   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/custom-flannel-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:41.545038   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:33:43.934731   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-885666 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m27.539794871s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-885666 -n default-k8s-diff-port-885666
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (567.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-505584 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-505584 --alsologtostderr -v=3: (1.279844218s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-505584 -n old-k8s-version-505584: exit status 7 (64.057596ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-505584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
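Note the non-zero exit above: while the VM is down, minikube status still prints the state but exits with code 7, so a script driving this flow has to tolerate that exit code rather than treat it as a hard failure; a small sketch:

    if ! state=$(minikube status -p old-k8s-version-505584 --format='{{.Host}}'); then
      echo "status exited non-zero (expected while stopped); host state: ${state}"
    fi
    # enabling an addon at this point appears to update only the stored profile config
    minikube addons enable dashboard -p old-k8s-version-505584 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4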

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-471541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0814 17:57:19.605835   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/enable-default-cni-984053/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:57:21.996372   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/bridge-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-471541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (43.931367643s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.93s)
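The newest-cni profile layers a few extra apiserver/kubeadm knobs on top of the usual flags; stripped down to the interesting parts, with the values copied from the command above and an arbitrary profile name:

    minikube start -p newest-cni-demo --memory=2200 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0 \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16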

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-471541 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-471541 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-471541 --alsologtostderr -v=3: (10.374889795s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-471541 -n newest-cni-471541
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-471541 -n newest-cni-471541: exit status 7 (64.489652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-471541 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-471541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0814 17:58:02.588588   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/addons-521895/client.crt: no such file or directory" logger="UnhandledError"
E0814 17:58:13.865053   21177 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19446-13977/.minikube/profiles/calico-984053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-471541 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (35.867427934s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-471541 -n newest-cni-471541
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-471541 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-471541 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-471541 --alsologtostderr -v=1: (1.755673624s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-471541 -n newest-cni-471541
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-471541 -n newest-cni-471541: exit status 2 (367.521665ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-471541 -n newest-cni-471541
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-471541 -n newest-cni-471541: exit status 2 (245.190465ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-471541 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-471541 -n newest-cni-471541
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-471541 -n newest-cni-471541
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.06s)

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 2.83
263 TestNetworkPlugins/group/cilium 3.54
277 TestStartStop/group/disable-driver-mounts 0.14
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-984053 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-984053" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-984053

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-984053"

                                                
                                                
----------------------- debugLogs end: kubenet-984053 [took: 2.695103097s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-984053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-984053
--- SKIP: TestNetworkPlugins/group/kubenet (2.83s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-984053 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-984053" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-984053

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: docker system info:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: cri-docker daemon status:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: cri-docker daemon config:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: cri-dockerd version:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: containerd daemon status:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: containerd daemon config:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: containerd config dump:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: crio daemon status:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: crio daemon config:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: /etc/crio:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

>>> host: crio config:
* Profile "cilium-984053" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-984053"

----------------------- debugLogs end: cilium-984053 [took: 3.415176667s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-984053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-984053
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)

x
+
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-005029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-005029
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
